// File: modules/tutorials/pages/end-to-end_data_pipeline_example.adoc
You should make sure that you have everything you need:

* A running Kubernetes cluster
* https://kubernetes.io/docs/tasks/tools/#kubectl[kubectl] to interact with the cluster
* https://helm.sh/[Helm] to deploy third-party dependencies
* xref:stackablectl::installation.adoc[stackablectl] to install and interact with Stackable operators

Instructions for installing via Helm are also provided throughout the tutorial.

This section shows how to instantiate the first part of the entire processing chain, which will ingest CSV files from an S3 bucket, split the files into individual records and send these records to a Kafka topic.
=== Deploy the Operators
The resource definitions rolled out in this section need their respective Operators to be installed in the Kubernetes cluster. For example, to run a Kafka instance, the Kafka Operator needs to be installed.
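As a sketch, the required Operators can be installed with stackablectl. This assumes a reachable Kubernetes cluster and that the default release versions selected by stackablectl are acceptable; installing the equivalent Helm charts achieves the same result.

```shell
# Install the operators required for this part of the pipeline.
# Assumes a configured kubeconfig pointing at the target cluster;
# operator versions are resolved by stackablectl unless pinned explicitly.
stackablectl operator install zookeeper kafka nifi
```

Once the operator deployments are running, the custom resources below can be applied.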
Since both Kafka and NiFi depend on Apache ZooKeeper, we will create a ZooKeeper cluster first.

[source,bash]
----
kubectl apply -f - <<EOF
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  version: 3.8.0-stackable0.7.1
  servers:
    roleGroups:
      default:
        selector:
          matchLabels:
            kubernetes.io/os: linux
        replicas: 1
        config: {}
EOF
----
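Before creating the dependent services, it is worth confirming that ZooKeeper actually came up. A minimal check, assuming the StatefulSet follows the usual `<cluster>-<role>-<rolegroup>` naming scheme (`simple-zk-server-default` here):

```shell
# Block until the ZooKeeper StatefulSet reports all replicas ready.
# The resource name is an assumption based on the common naming scheme.
kubectl rollout status statefulset/simple-zk-server-default --timeout=5m
```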
=== Deploying Kafka and NiFi

To deploy Kafka and NiFi you can now apply the cluster configuration. Run the following command in the console to deploy and configure both services.

[source,bash]
----
kubectl apply -f - <<EOF
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-kafka-znode
spec:
  clusterRef:
    name: simple-zk
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-nifi-znode
spec:
  clusterRef:
    name: simple-zk
---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: simple-kafka
spec:
  version: 3.2.0-stackable0.1.0
  zookeeperConfigMapName: simple-kafka-znode
  brokers:
    config:
      # ... (broker role-group configuration omitted here)
---
apiVersion: nifi.stackable.tech/v1alpha1
kind: NifiCluster
metadata:
  name: simple-nifi
spec:
  version: 1.16.3-stackable0.1.0
  zookeeperConfigMapName: simple-nifi-znode
  config:
    authentication:
      # ... (authentication configuration omitted here)
EOF
----
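The same kind of readiness check works for the new services, again assuming the default `<cluster>-<role>-<rolegroup>` StatefulSet names:

```shell
# Block until the Kafka brokers and NiFi nodes are ready.
# Resource names are assumptions based on the common naming scheme;
# NiFi can take noticeably longer to start than Kafka.
kubectl rollout status statefulset/simple-kafka-broker-default --timeout=5m
kubectl rollout status statefulset/simple-nifi-node-default --timeout=10m
```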
Now that the Operators and dependencies are set up, you can deploy the Druid cluster.