
One of the best ways of getting started with a new platform is to try it out.

== About this guide

Firstly, let’s cover whether this *Getting Started* guide is right for you. It is intended as a learning tool to discover more about Stackable, its deployment and architecture.

* If you just want to get up and running quickly, there is a quickstart script, available in the https://github.com/stackabletech/stackable-utils[stackable-utils] repository on GitHub, that will install services on a single node.
* If you want to build a production cluster then this is not for you. This tutorial familiarises you with the Stackable architecture and is not a guide for building robust clusters.

== Installing Kubernetes

Stackable’s control plane is built around Kubernetes. We’ll be deploying services not as containers but as regular services controlled by systemd. Stackable Agent is a custom kubelet that bridges the two worlds of Kubernetes and native deployment. For this walkthrough we’ll be using K3s, which offers a very quick and easy way to bootstrap your Kubernetes infrastructure.

On your *controller* node run the following commands as root to install K3s:

    apt-get install curl
    curl -sfL https://get.k3s.io | sh -

So long as your VM has an Internet connection, the script will download and automatically configure a simple Kubernetes environment.
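
Before moving on it can be reassuring to confirm that K3s itself came up. As a quick sketch, assuming the default systemd unit name `k3s` created by the installer and the `kubectl` command it links on the controller:

    # Check the K3s service is active and the controller node has registered itself
    systemctl status k3s --no-pager
    kubectl get nodes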

To check if everything worked as expected you can use `kubectl cluster-info`:

    CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

Now that we have Kubernetes running on the controller node, we need to distribute the configuration to the *worker* nodes. You’ll find a configuration file on your controller node in `/etc/rancher/k3s/k3s.yaml`.

Edit this file and set the `clusters->server` property to the address of your controller node (by default it is set to 127.0.0.1). Once you’ve done this you can distribute the file out to the cluster: copy it to `/root/.kube/config` on each of the nodes (*controller* and *workers*). The user's home directory is one of the default locations for the Kubernetes configuration file, and the Stackable Agent, which must run as root, will check there by default.
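
As an illustration of that distribution step, the sketch below copies and edits the file, then pushes it out with `scp`. The hostname `controller.stackable` is a placeholder for your controller’s address, and root SSH access between the nodes is assumed:

    # Copy the generated kubeconfig and point it at the controller instead of 127.0.0.1
    cp /etc/rancher/k3s/k3s.yaml /tmp/k3s-config
    sed -i 's|https://127.0.0.1:6443|https://controller.stackable:6443|' /tmp/k3s-config
    # Distribute it to root's kubeconfig location on every node, controller included
    for node in controller.stackable node1.stackable node2.stackable node3.stackable; do
        ssh root@"$node" mkdir -p /root/.kube
        scp /tmp/k3s-config root@"$node":/root/.kube/config
    done
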
== Installing Stackable

In order to allow creating a repository, you’ll have to create the CRD for repositories:

    shortNames:
    - repo

You can choose whatever way is most convenient for you to apply this CRD to your cluster from the *controller*. You can use `kubectl apply -f` to read the CRD from a file or from stdin, as in this example:

    cat <<EOF | kubectl apply -f -
    apiVersion: apiextensions.k8s.io/v1
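
Once the full CRD has been applied, you can confirm that Kubernetes now knows about the new resource type; `kubectl get crds` is plain Kubernetes and lists every registered definition:

    # The repository CRD should appear in this list
    kubectl get crds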

You can either host your own repository or specify the Stackable public repository.

=== Installing Stackable CRDs

Kubernetes uses custom resource definitions, or CRDs, to define the resources that will be under its control. We first need to load the CRDs for the Stackable services before Kubernetes is able to deploy them to the cluster. We can do this using kubectl again, just as we did to install the CRD for the Stackable repository. Kubectl can read from stdin, so on the *controller*, use cURL to download the CRDs we need and pipe them to kubectl.
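
The shape of those commands is sketched below; the URLs are placeholders only, standing in for wherever the CRD manifests for each Stackable operator are published:

    # Placeholder URLs -- substitute the published CRD manifest for each operator
    curl -sL https://example.com/zookeeper-operator/zookeepercluster.crd.yaml | kubectl apply -f -
    curl -sL https://example.com/kafka-operator/kafkacluster.crd.yaml | kubectl apply -f -
    curl -sL https://example.com/nifi-operator/nificluster.crd.yaml | kubectl apply -f -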

Check the output for each command. You should see a message that the CRD was successfully created.

=== Configuring the Stackable OS package repository

You will need to configure the Stackable OS package repository on all nodes. We’ll also take the opportunity to install OpenJDK 11, as it will be required by the Stackable services we will be running.

Stackable supports running agents on Debian 10 "Buster", CentOS 7, and CentOS 8.

    /usr/bin/yum clean all
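
On the Debian workers, installing the JDK itself is a single command; `openjdk-11-jdk` is the stock Debian package name (a headless JRE would also do):

    # Install OpenJDK 11 on a Debian/Ubuntu node
    apt-get install -y openjdk-11-jdk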

=== Installing Stackable Operators

The Stackable operators are components that translate the service definitions deployed via Kubernetes into deployed services on the worker nodes. They can be installed on any node that has access to the Kubernetes control plane. In this example we will install them on the *controller* node. Remember to configure the Stackable OS package repo before installing the operators, as described above.

==== Debian and Ubuntu

    apt-get install stackable-zookeeper-operator \

You can use `systemctl status <service-name>` to check whether the services have started.
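
For example, assuming the systemd unit names match the package names used above:

    # Service names here are assumptions based on the operator package names
    systemctl status stackable-zookeeper-operator
    systemctl status stackable-kafka-operator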

=== Installing Stackable Agent

On each of the *worker* nodes you’ll need to install Stackable Agent, which runs a custom kubelet that can be used to launch non-containerised applications using systemd. If this doesn’t make a lot of sense to you, don’t worry. What this means is that you can run regular Linux services using the Kubernetes control plane. This makes sense, for example, if you wish to run a hybrid deployment with a mix of bare metal and containerised services and manage them all with one framework.

NOTE: Don’t install the agent onto the *controller* node, as it already has the K3s kubelet running and this would cause a clash. Stackable Agent should only be deployed on the *worker* nodes.

==== Debian and Ubuntu

    apt-get install stackable-agent
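
The agent runs as a systemd service. Assuming the unit name matches the package name, it can be started and enabled with:

    # Start the agent now and on every boot (unit name assumed to match the package)
    systemctl enable --now stackable-agent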

During the first start of the agent it will perform some bootstrapping tasks:

    Aug 10 12:53:48 node1 stackable-agent[5208]: [2021-08-10T12:53:48Z INFO stackable_agent] Successfully bootstrapped TLS certificate: TLS certificate requires manual approval. Run kubectl certificate approve node1.stackable-tls

These certificate signing requests can be viewed on the *controller* with

    kubectl get csr

You will need to manually approve the certificate requests created by the agents before the agents can start. You can do this by running `kubectl certificate approve <agent-fqdn>-tls` on the *controller* node after starting the agent.
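
With the three worker nodes used in this walkthrough that amounts to one approval per agent (substitute your own worker hostnames):

    # Approve the CSR created by each worker's agent
    kubectl certificate approve node1.stackable-tls
    kubectl certificate approve node2.stackable-tls
    kubectl certificate approve node3.stackable-tls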

Once the nodes have been registered and had their certificates signed, they will show as ready:

    node3.stackable   Ready   <none>   5s      0.7.0
    node1.stackable   Ready   <none>   3m43s   0.7.0

NOTE: If the agent fails to start on any of the worker nodes, run `journalctl -fu stackable-agent` for further information.

NOTE: If the failure is due to the presence of an existing CSR on the *controller* node, delete the offending CSR with `kubectl delete csr <CSR Name>`. The names of the CSRs can be discovered with `kubectl get csr`.

== Deploying Stackable Services

At this point you’ve successfully deployed the Stackable node infrastructure and are ready to deploy services to the cluster. To do this we provide service descriptions to Kubernetes for each of the services we wish to deploy.
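
Each of the following sections follows the same pattern: a resource description piped to `kubectl apply` on the *controller*. A plain-Kubernetes way to watch the objects appear as the operators react is:

    # Watch the resources being created and scheduled onto the worker nodes
    kubectl get pods --all-namespaces -o wide --watch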

=== Apache ZooKeeper

We will deploy 3 Apache ZooKeeper instances to our cluster. This is a fairly typical deployment pattern for ZooKeeper:

      syncLimit: 2
    EOF

NOTE: Debian will automatically bind the computer's hostname to 127.0.1.1, which causes ZooKeeper to only listen on the localhost interface. To prevent this, comment out the corresponding entry from `/etc/hosts` on each *worker* node.
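
On Debian that entry typically looks like the line below (hostname shown is illustrative); prefixing it with `#` disables it:

    # /etc/hosts on a worker node -- comment out the 127.0.1.1 mapping of the node's own hostname
    #127.0.1.1      node1.stackable node1
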
=== Apache Kafka

We will deploy 3 Apache Kafka brokers, another typical deployment pattern for Kafka clusters. Note that Kafka depends on the ZooKeeper service and the `zookeeperReference` property below points to the namespace and name we gave to the ZooKeeper service deployed previously.

        config:
          logDirs: "/tmp/kafka-logs"
          metricsPort: 96
    EOF
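
As a rough sketch of what that `zookeeperReference` looks like (treat the nesting and values as assumptions; the Kafka operator's CRD is authoritative):

    # Sketch only -- namespace and name must match the ZooKeeper resource created earlier
    zookeeperReference:
      namespace: default
      name: simple
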
=== Apache NiFi

We will deploy 3 Apache NiFi servers. This might seem over the top for a tutorial cluster, but it's worth pointing out that the operator will cluster the 3 NiFi servers for us automatically.

If all has gone well then you will have successfully deployed a Stackable cluster.

=== Apache ZooKeeper

Log onto one of your *worker* nodes and run the ZooKeeper CLI shell. Stackable stores the service software in `/opt/stackable/packages`, so you may wish to add this to your PATH environment variable.
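
A minimal smoke test, assuming the package unpacks to a versioned directory under `/opt/stackable/packages` containing the standard `zkCli.sh` (the exact directory name will differ on your system):

    # List the root znodes on the local ZooKeeper instance
    /opt/stackable/packages/apache-zookeeper-3.7.0-bin/bin/zkCli.sh -server localhost:2181 ls /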

=== Apache Kafka

To test Kafka we'll use the tool `kafkacat`:

    sudo apt install kafkacat

With `kafkacat` installed we can log into one of the *worker* nodes and query the metadata on the broker running on localhost.

    user@node1:~$ kafkacat -b localhost -L
    Metadata for all topics (from broker -1: localhost:9092/bootstrap):

We should see 3 brokers listed, showing that Stackable has successfully deployed the brokers as a cluster.
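
kafkacat can also round-trip a message to confirm the brokers accept writes. The topic name is arbitrary, and this assumes topic auto-creation is enabled (the Kafka default):

    # Produce a single message, then read it back
    echo "hello stackable" | kafkacat -b localhost -t getting-started-test -P
    kafkacat -b localhost -t getting-started-test -C -c 1 -e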

=== Apache NiFi

Apache NiFi provides a web interface and the easiest way to test it is to view this in a web browser. Browse to the address of one of your *worker* nodes on port 8080, e.g. http://node1.stackable:8080/nifi, and you should see the NiFi Canvas.

image:nifi_menu.png[The Apache NiFi web interface]