
Commit f5f383d

lee-butcher and Jimvin authored
Update 'Getting Started' to aid in clarity, and get around some sticking points (#114)
Co-authored-by: Jim Halfpenny <jim@source321.com>
1 parent 9a04e70 commit f5f383d

File tree

1 file changed: +33 -21 lines changed


modules/ROOT/pages/getting_started.adoc

Lines changed: 33 additions & 21 deletions
@@ -4,7 +4,7 @@ One of the best ways of getting started with a new platform is to try it out. An

 == About this guide

-Firstly, let’s cover whether this Getting started guide is right for you. This is intended as a learning tool to discover more about Stackable, its deployment and architecture.
+Firstly, let’s cover whether this *Getting Started* guide is right for you. This is intended as a learning tool to discover more about Stackable, its deployment and architecture.

 * If you just want to get up and running quickly there is a quickstart script that will install services on a single node available in the https://github.com/stackabletech/stackable-utils[stackable-utils] repository on GitHub.
 * If you want to build a production cluster then this is not for you. This tutorial is to familiarise you with the Stackable architecture and is not a guide for building robust clusters.
@@ -34,9 +34,12 @@ Below is a table for recommended sizing for this tutorial. Bear in mind that the

 == Installing Kubernetes

-Stackable’s control plane is built around Kubernetes. We’ll be deploying services not as containers but as regular services controlled by systemd. Stackable Agent is a custom kubelet and bridges the worlds between Kubernetes and native deployment. For this walkthrough we’ll be using K3s, which offers a very quick and easy way to bootstrap your Kubernetes infrastructure. On your controller node run the following command as root to install K3s:
+Stackable’s control plane is built around Kubernetes. We’ll be deploying services not as containers but as regular services controlled by systemd. Stackable Agent is a custom kubelet that bridges the worlds of Kubernetes and native deployment. For this walkthrough we’ll be using K3s, which offers a very quick and easy way to bootstrap your Kubernetes infrastructure.

-`curl -sfL https://get.k3s.io | sh -`
+On your *controller* node run the following commands as root to install K3s:
+
+ apt-get install curl
+ curl -sfL https://get.k3s.io | sh -

 So long as your VM has an Internet connection it will download and automatically configure a simple Kubernetes environment.

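A quick sanity check at this point: K3s installs `kubectl` on the controller, so before moving on you can confirm the single K3s node reports Ready:

 # run on the controller node
 kubectl get nodes
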
@@ -65,7 +68,9 @@ To check if everything worked as expected you can use `kubectl cluster-info` to
 CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
 Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

-Now that we have Kubernetes running on the controller node we need to distribute the configuration to the worker nodes. You’ll find a configuration file on your controller node in `/etc/rancher/k3s/k3s.yaml`. Edit this file and set the property named clusters->server to the address of your controller node (by default this is set to 127.0.0.1). Once you’ve done this you can distribute the file out to the cluster. Copy this file to `/root/.kube/config` on each of the nodes (controller and workers). One of the default locations for the Kubernetes configuration file is in the user's home directory and the Stackable Agent will check here by default when running as root as required.
+Now that we have Kubernetes running on the controller node we need to distribute the configuration to the *worker* nodes. You’ll find a configuration file on your controller node in `/etc/rancher/k3s/k3s.yaml`.
+
+Edit this file and set the property named clusters->server to the address of your controller node (by default this is set to 127.0.0.1). Once you’ve done this you can distribute the file out to the cluster. Copy this file to `/root/.kube/config` on each of the nodes (*controller* and *workers*). One of the default locations for the Kubernetes configuration file is the user's home directory, and the Stackable Agent will check there by default when running as root, as required.

 == Installing Stackable

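A minimal sketch of that distribution step, assuming the controller is reachable as `controller.stackable` and `node1.stackable` is one of the workers (both hostnames are placeholders):

 # point the kubeconfig at the controller instead of 127.0.0.1
 sed -i 's/127.0.0.1/controller.stackable/' /etc/rancher/k3s/k3s.yaml
 # install it locally on the controller
 mkdir -p /root/.kube && cp /etc/rancher/k3s/k3s.yaml /root/.kube/config
 # and copy it to each worker in turn
 ssh root@node1.stackable 'mkdir -p /root/.kube'
 scp /etc/rancher/k3s/k3s.yaml root@node1.stackable:/root/.kube/config
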
@@ -106,7 +111,7 @@ In order to allow creating a repository, you’ll have to create the CRD for rep
     shortNames:
     - repo

-You can choose whatever way is most convenient for you to apply this CRD to your cluster. You can use `kubectl apply -f` to read the CRD from a file or from stdin as in this example:
+You can choose whatever way is most convenient for you to apply this CRD to your cluster controller. You can use `kubectl apply -f` to read the CRD from a file or from stdin as in this example:

 cat <<EOF | kubectl apply -f -
 apiVersion: apiextensions.k8s.io/v1
@@ -156,16 +161,16 @@ You can either host your own repository or specify the Stackable public reposito

 === Installing Stackable CRDs

-Kubernetes uses custom resource descriptors or CRDs to define the resources that will be under its control. We firstly need to load the CRDs for the Stackable services before it will be able to deploy them to the cluster. We can do this using kubectl again, just as we did to install the CRD for the Stackable repository. Kubectl can read from stdin, so we’ll use cURL to download the CRDs we need and pipe them to kubectl.
+Kubernetes uses custom resource definitions, or CRDs, to define the resources that will be under its control. We first need to load the CRDs for the Stackable services before Kubernetes will be able to deploy them to the cluster. We can do this using kubectl again, just as we did to install the CRD for the Stackable repository. Kubectl can read from stdin, so on the *controller*, use cURL to download the CRDs we need and pipe them to kubectl.

 kubectl apply -f https://raw.githubusercontent.com/stackabletech/zookeeper-operator/main/deploy/crd/zookeepercluster.crd.yaml
-kubectl apply -f https://raw.githubusercontent.com/stackabletech/zookeeper-operator/main/deploy/crd/stop.crd.yaml
-kubectl apply -f https://raw.githubusercontent.com/stackabletech/zookeeper-operator/main/deploy/crd/start.crd.yaml
 kubectl apply -f https://raw.githubusercontent.com/stackabletech/zookeeper-operator/main/deploy/crd/restart.crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/stackabletech/zookeeper-operator/main/deploy/crd/start.crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/stackabletech/zookeeper-operator/main/deploy/crd/stop.crd.yaml
 kubectl apply -f https://raw.githubusercontent.com/stackabletech/kafka-operator/main/deploy/crd/kafkacluster.crd.yaml
-kubectl apply -f https://raw.githubusercontent.com/stackabletech/kafka-operator/main/deploy/crd/stop.crd.yaml
-kubectl apply -f https://raw.githubusercontent.com/stackabletech/kafka-operator/main/deploy/crd/start.crd.yaml
 kubectl apply -f https://raw.githubusercontent.com/stackabletech/kafka-operator/main/deploy/crd/restart.crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/stackabletech/kafka-operator/main/deploy/crd/start.crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/stackabletech/kafka-operator/main/deploy/crd/stop.crd.yaml
 kubectl apply -f https://raw.githubusercontent.com/stackabletech/agent/main/deploy/crd/repository.crd.yaml
 kubectl apply -f https://raw.githubusercontent.com/stackabletech/nifi-operator/main/deploy/crd/nificluster.crd.yaml

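If you prefer, the same commands can be expressed as a loop. This sketch is equivalent to the list above (it requires bash, for the brace expansion):

 BASE=https://raw.githubusercontent.com/stackabletech
 for path in zookeeper-operator/main/deploy/crd/{zookeepercluster,restart,start,stop}.crd.yaml \
             kafka-operator/main/deploy/crd/{kafkacluster,restart,start,stop}.crd.yaml \
             agent/main/deploy/crd/repository.crd.yaml \
             nifi-operator/main/deploy/crd/nificluster.crd.yaml; do
     # each CRD file is fetched by kubectl and applied to the cluster
     kubectl apply -f "$BASE/$path"
 done

Afterwards, `kubectl get crds` should list the newly created definitions.
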
@@ -176,14 +181,14 @@ Check the output for each command. You should see a message that the CRD was suc

 === Configuring the Stackable OS package repository

-You will need to configure the Stackable OS package repository on the worker nodes. We’ll also take the opportunity to install OpenJDK Java 11 as well as this will be required by the Stackable services we will be running.
+You will need to configure the Stackable OS package repository on all nodes. We’ll also take the opportunity to install OpenJDK 11, as this will be required by the Stackable services we will be running.

 Stackable supports running agents on Debian 10 "Buster", CentOS 7, and CentOS 8.

 ==== Debian and Ubuntu
 apt-get install gnupg openjdk-11-jdk curl
 apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 16dd12f5c7a6d76a
-echo "deb https://repo.stackable.tech/repository/deb-dev buster main" > /etc/apt/sources.list.d/stackable.list
+echo "deb [arch=amd64] https://repo.stackable.tech/repository/deb-dev buster main" > /etc/apt/sources.list.d/stackable.list
 apt-get update

 ==== Red Hat and CentOS
@@ -198,7 +203,7 @@ Stackable supports running agents on Debian 10 "Buster", CentOS 7, and CentOS 8.
 /usr/bin/yum clean all

 === Installing Stackable Operators
-The Stackable operators are components that translate the service definitions deployed via Kubernetes into deploy services on the worker nodes. These can be installed on any node that has access to the Kubernetes control plane. In this example we will install them on the controller node. Remember to install the Stackable OS package repo before installing the operators as described above.
+The Stackable operators are components that translate the service definitions deployed via Kubernetes into deployed services on the worker nodes. These can be installed on any node that has access to the Kubernetes control plane. In this example we will install them on the *controller* node. Remember to install the Stackable OS package repo, as described above, before installing the operators.

 ==== Debian and Ubuntu
 apt-get install stackable-zookeeper-operator \
@@ -232,9 +237,9 @@ You can use `systemctl status <service-name>` to check whether the services have


 === Installing Stackable Agent
-On each of the worker nodes you’ll need to install Stackable Agent, which runs a custom kubelet that can be used to launch non-containerised applications using systemd. If this doesn’t make a lot of sense to you, don’t worry. What this means is that you can run regular Linux services using the Kubernetes control plane. This makes sense for example if you wish to run a hybrid deployment with a mix of bare metal and containerised services and manage them all with one framework.
+On each of the *worker* nodes you’ll need to install Stackable Agent, which runs a custom kubelet that can be used to launch non-containerised applications using systemd. If this doesn’t make a lot of sense to you, don’t worry. What this means is that you can run regular Linux services using the Kubernetes control plane. This makes sense, for example, if you wish to run a hybrid deployment with a mix of bare-metal and containerised services and manage them all with one framework.

-NOTE: Don’t install the agent onto the controller node as it is already has the K3s kubelet running and this would cause a clash. Stackable Agent should only be deployed on the worker nodes.
+NOTE: Don’t install the agent onto the *controller* node as it already has the K3s kubelet running and this would cause a clash. Stackable Agent should only be deployed on the *worker* nodes.

 ==== Debian and Ubuntu
 apt-get install stackable-agent
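Once the package is installed, you would typically enable and start the agent through systemd; the unit name `stackable-agent` here is inferred from the `journalctl -fu stackable-agent` troubleshooting hint further below:

 systemctl enable stackable-agent
 systemctl start stackable-agent
 systemctl status stackable-agent
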
@@ -262,7 +267,11 @@ During the first start of the agent, it will perform some bootstrapping tasks, m

 Aug 10 12:53:48 node1 stackable-agent[5208]: [2021-08-10T12:53:48Z INFO stackable_agent] Successfully bootstrapped TLS certificate: TLS certificate requires manual approval. Run kubectl certificate approve node1.stackable-tls

-You will need to manually approve that certificate requests created by the agents before the agent can start. You can do this by running `kubectl certificate approve <agent-fqdn>-tls` on the controller node after starting the agent.
+These certificate signing requests can be viewed on the *controller* with
+
+ kubectl get csr
+
+You will need to manually approve the certificate requests created by the agents before the agent can start. You can do this by running `kubectl certificate approve <agent-fqdn>-tls` on the *controller* node after starting the agent.

 root@kubernetes:~# kubectl certificate approve node1.stackable-tls
 certificatesigningrequest.certificates.k8s.io/node1.stackable-tls approved
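If several agents are waiting at once, the pending requests can be approved in one pass. A convenience sketch; review the output of `kubectl get csr` before approving anything blindly:

 kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve
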
@@ -276,6 +285,9 @@ Once the nodes have been registered and had their certificates signed they will
 node3.stackable Ready <none> 5s 0.7.0
 node1.stackable Ready <none> 3m43s 0.7.0

+NOTE: If there is a failure to start the agent on any of the worker nodes, run `journalctl -fu stackable-agent` for further information.
+
+NOTE: If the failure is due to the presence of an existing CSR on the *controller* node, delete the offending CSR with `kubectl delete csr <CSR Name>`. The names of the CSRs can be discovered with `kubectl get csr`.

 == Deploying Stackable Services
 At this point you’ve successfully deployed the Stackable node infrastructure and are ready to deploy services to the cluster. To do this we provide service descriptions to Kubernetes for each of the services we wish to deploy.
@@ -307,7 +319,7 @@ We will deploy 3 Apache ZooKeeper instances to our cluster. This is a fairly typ
     syncLimit: 2
 EOF

-NOTE: Debian will automatically bind the computer's hostname to 127.0.1.1, which causes ZooKeeper to only listen on the localhost interface. To prevent this, comment out the corresponding entry from /etc/hosts on each worker node.
+NOTE: Debian will automatically bind the computer's hostname to 127.0.1.1, which causes ZooKeeper to only listen on the localhost interface. To prevent this, comment out the corresponding entry from /etc/hosts on each *worker* node.

 === Apache Kafka
 We will deploy 3 Apache Kafka brokers, another typical deployment pattern for Kafka clusters. Note that Kafka depends on the ZooKeeper service and the zookeeperReference property below points to the namespace and name we gave to the ZooKeeper service deployed previously.
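After applying a service description you can watch the rollout from the *controller*. These are generic Kubernetes checks rather than commands from the guide, and the custom resource's plural name (e.g. `zookeeperclusters`) is an assumption based on the CRD file names:

 # pods scheduled onto the agent nodes
 kubectl get pods -o wide
 # the custom resource itself (assumed plural name)
 kubectl get zookeeperclusters
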
@@ -334,7 +346,7 @@ We will deploy 3 Apache Kafka brokers, another typical deployment pattern for Ka
 config:
   logDirs: "/tmp/kafka-logs"
   metricsPort: 96
-
+EOF

 === Apache NiFi
 We will deploy 3 Apache NiFi servers. This might seem over the top for a tutorial cluster, but it's worth pointing out that the operator will cluster the 3 NiFi servers for us automatically.
@@ -387,7 +399,7 @@ If all has gone well then you will have successfully deployed a Stackable cluste

 === Apache ZooKeeper

-Log onto one of your worker nodes and run the ZooKeeper CLI shell. Stackable stores the service software in /opt/stackable/packages, so you may wish to add this to your PATH environment variable.
+Log onto one of your *worker* nodes and run the ZooKeeper CLI shell. Stackable stores the service software in /opt/stackable/packages, so you may wish to add this to your PATH environment variable.

 PATH=$PATH:/opt/stackable/packages/zookeeper-3.5.8/apache-zookeeper-3.5.8-bin/bin
 zkCli.sh
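Once the shell connects, a few standard ZooKeeper CLI commands make a quick smoke test (the znode name is arbitrary):

 ls /
 create /hello world
 get /hello
 delete /hello
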
@@ -402,7 +414,7 @@ To test Kafka we'll use the tool `kafkacat`.

 sudo apt install kafkacat

-With `kafkacat` installed we can log into one of the worker nodes query the metadata on the broker running on localhost.
+With `kafkacat` installed we can log into one of the *worker* nodes and query the metadata on the broker running on localhost.

 user@node1:~$ kafkacat -b localhost -L
 Metadata for all topics (from broker -1: localhost:9092/bootstrap):
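Beyond listing metadata, `kafkacat` can also round-trip a test message using its standard flags (the topic name is arbitrary and will be auto-created if the brokers allow it):

 # produce one message to the topic "test"
 echo "hello stackable" | kafkacat -b localhost -t test -P
 # consume one message and exit
 kafkacat -b localhost -t test -C -c 1 -e
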
@@ -415,7 +427,7 @@ With `kafkacat` installed we can log into one of the worker nodes query the meta
 We should see 3 brokers listed, showing that Stackable has successfully deployed the brokers as a cluster.

 === Apache NiFi
-Apache NiFi provides a web interface and the easiest way to test it is to view this in a web browser. Browse to the address of one of your worker nodes on port 8080 e.f. http://node1.stackable:8080/nifi and you should see the NiFi Canvas.
+Apache NiFi provides a web interface and the easiest way to test it is to view it in a web browser. Browse to the address of one of your *worker* nodes on port 8080, e.g. http://node1.stackable:8080/nifi, and you should see the NiFi Canvas.

 image:nifi_menu.png[The Apache NiFi web interface]

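If you would rather check from a shell first, a plain HTTP request against the example address above should get a response from NiFi (the hostname is the placeholder from the guide):

 curl -I http://node1.stackable:8080/nifi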