Commit d738d0e

Adapt to release 2206 (#228)
* started adapting to release 2206
* adapted to new releases / crds / nifi version
* capitalized helm -> Helm
1 parent 3794e4e commit d738d0e

File tree

2 files changed (+62, -29 lines)


modules/tutorials/attachments/s3-kafka.xml

Lines changed: 7 additions & 7 deletions
Original file line numberDiff line numberDiff line change
@@ -93,7 +93,7 @@
9393
<bundle>
9494
<artifact>nifi-record-serialization-services-nar</artifact>
9595
<group>org.apache.nifi</group>
96-
<version>1.15.0</version>
96+
<version>1.16.3</version>
9797
</bundle>
9898
<comments/>
9999
<descriptors>
@@ -379,7 +379,7 @@
379379
<bundle>
380380
<artifact>nifi-record-serialization-services-nar</artifact>
381381
<group>org.apache.nifi</group>
382-
<version>1.15.0</version>
382+
<version>1.16.3</version>
383383
</bundle>
384384
<comments/>
385385
<descriptors>
@@ -594,7 +594,7 @@
594594
<bundle>
595595
<artifact>nifi-record-serialization-services-nar</artifact>
596596
<group>org.apache.nifi</group>
597-
<version>1.15.0</version>
597+
<version>1.16.3</version>
598598
</bundle>
599599
<comments/>
600600
<descriptors>
@@ -738,7 +738,7 @@
738738
<bundle>
739739
<artifact>nifi-aws-nar</artifact>
740740
<group>org.apache.nifi</group>
741-
<version>1.15.0</version>
741+
<version>1.16.3</version>
742742
</bundle>
743743
<config>
744744
<bulletinLevel>WARN</bulletinLevel>
@@ -1017,7 +1017,7 @@
10171017
<bundle>
10181018
<artifact>nifi-aws-nar</artifact>
10191019
<group>org.apache.nifi</group>
1020-
<version>1.15.0</version>
1020+
<version>1.16.3</version>
10211021
</bundle>
10221022
<config>
10231023
<bulletinLevel>WARN</bulletinLevel>
@@ -1258,7 +1258,7 @@
12581258
<bundle>
12591259
<artifact>nifi-kafka-2-6-nar</artifact>
12601260
<group>org.apache.nifi</group>
1261-
<version>1.15.0</version>
1261+
<version>1.16.3</version>
12621262
</bundle>
12631263
<config>
12641264
<bulletinLevel>WARN</bulletinLevel>
@@ -1577,7 +1577,7 @@
15771577
<bundle>
15781578
<artifact>nifi-standard-nar</artifact>
15791579
<group>org.apache.nifi</group>
1580-
<version>1.15.0</version>
1580+
<version>1.16.3</version>
15811581
</bundle>
15821582
<config>
15831583
<bulletinLevel>WARN</bulletinLevel>
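
All seven hunks make the same substitution, bumping the NiFi NAR bundle versions from 1.15.0 to 1.16.3. As an aside, a change like this can be applied in one pass; a minimal sketch, assuming GNU sed and that 1.15.0 appears only inside these <version> elements:

# Bump every NAR bundle version in the template in a single pass.
# Assumes GNU sed and that "1.15.0" only occurs in the <version> elements shown above.
sed -i 's|<version>1.15.0</version>|<version>1.16.3</version>|g' \
  modules/tutorials/attachments/s3-kafka.xml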

modules/tutorials/pages/end-to-end_data_pipeline_example.adoc

Lines changed: 55 additions & 22 deletions
@@ -12,7 +12,7 @@ This tutorial is intended to run in a private network or lab; it does not enable
 You should make sure that you have everything you need:
 
 * A running Kubernetes cluster
-* https://kubernetes.io/docs/tasks/tools/#kubectl[Kubectl] to interact with the cluster
+* https://kubernetes.io/docs/tasks/tools/#kubectl[kubectl] to interact with the cluster
 * https://helm.sh/[Helm] to deploy third-party dependencies
 * xref:stackablectl::installation.adoc[stackablectl] to install and interact with Stackable operators
 +
@@ -34,7 +34,6 @@ Instructions for installing via Helm are also provided throughout the tutorial.
 
 This section shows how to instantiate the first part of the entire processing chain, which will ingest CSV files from an S3 bucket, split the files into individual records and send these records to a Kafka topic.
 
-
 === Deploy the Operators
 
 The resource definitions rolled out in this section need their respective Operators to be installed in the K8s cluster. I.e. to run a Kafka instance, the Kafka Operator needs to be installed.
@@ -85,7 +84,7 @@ stackablectl operator install kafka
 ====
 [source,bash]
 ----
-helm install zookeeper-operator stackable-stable/kafka-operator
+helm install kafka-operator stackable-stable/kafka-operator
 ----
 ====
 
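The removed line had installed the kafka-operator chart under the release name zookeeper-operator, which this hunk corrects. Not part of the commit, but a quick way to spot such copy-paste mismatches is to compare each Helm release name against its chart; a minimal sketch, assuming jq is installed:

# Print "release -> chart" pairs so a mismatched release name stands out.
helm list --all-namespaces --output json | jq -r '.[] | "\(.name) -> \(.chart)"'
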
@@ -95,20 +94,20 @@ NiFi is an ETL tool which will be used to model the dataflow of downloading and
 It will also be used to convert the file content from CSV to JSON.
 
 [source,bash]
-stackablectl operator install nifi=0.6.0-nightly
+stackablectl operator install nifi
 
 .Using Helm instead
 [%collapsible]
 ====
 [source,bash]
 ----
-helm install --repo https://repo.stackable.tech/repository/helm-dev nifi-operator nifi-operator --version=0.6.0-nightly
+helm install nifi-operator stackable-stable/nifi-operator
 ----
 ====
 
-=== Deploying Kafka and NiFi
+=== Deploying ZooKeeper
 
-To deploy Kafka and NiFi you can now apply the cluster configuration. You'll also need to deploy ZooKeeper, since both Kafka and NiFi depend on it. Run the following command in the console to deploy and configure all three services.
+Since both Kafka and NiFi depend on Apache ZooKeeper, we will create a ZooKeeper cluster first.
 
 [source,bash]
 kubectl apply -f - <<EOF
@@ -118,7 +117,7 @@ kind: ZookeeperCluster
 metadata:
   name: simple-zk
 spec:
-  version: 3.8.0
+  version: 3.8.0-stackable0.7.1
   servers:
     roleGroups:
       default:
@@ -127,6 +126,14 @@ spec:
             kubernetes.io/os: linux
         replicas: 1
         config: {}
+EOF
+
+=== Deploying Kafka and NiFi
+
+To deploy Kafka and NiFi you can now apply the cluster configuration. Run the following command in the console to deploy and configure both services.
+
+[source,bash]
+kubectl apply -f - <<EOF
 ---
 apiVersion: zookeeper.stackable.tech/v1alpha1
 kind: ZookeeperZnode
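
Since Kafka and NiFi are now applied in a second step, there is a natural point to wait for ZooKeeper to become ready in between. This is not in the commit; a minimal sketch, assuming the operator labels the pods with app.kubernetes.io/instance set to the cluster name:

# Block until the ZooKeeper pods report Ready (the label selector is an assumption).
kubectl wait --for=condition=Ready pod \
  --selector app.kubernetes.io/instance=simple-zk --timeout=300s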
@@ -141,7 +148,7 @@ kind: KafkaCluster
 metadata:
   name: simple-kafka
 spec:
-  version: 3.1.0
+  version: 3.2.0-stackable0.1.0
   zookeeperConfigMapName: simple-kafka-znode
   brokers:
     config:
@@ -179,7 +186,7 @@ kind: NifiCluster
 metadata:
   name: simple-nifi
 spec:
-  version: "1.15.0-stackable0.4.0"
+  version: 1.16.3-stackable0.1.0
   zookeeperConfigMapName: simple-nifi-znode
   config:
     authentication:
@@ -375,13 +382,26 @@ Now that the Operator and Dependencies are set up, you can deploy the Druid cluster
 
 [source]
 kubectl apply -f - <<EOF
+---
+apiVersion: secrets.stackable.tech/v1alpha1
+kind: SecretClass
+metadata:
+  name: druid-s3-credentials
+spec:
+  backend:
+    k8sSearch:
+      searchNamespace:
+        pod: {}
+---
 apiVersion: v1
 kind: Secret
 metadata:
   name: druid-s3-credentials
+  labels:
+    secrets.stackable.tech/class: druid-s3-credentials
 stringData:
-  accessKeyId: minioAccessKey
-  secretAccessKey: minioSecretKey
+  accessKey: minioAccessKey
+  secretKey: minioSecretKey
 EOF
 
 And now the cluster definition:
@@ -393,7 +413,7 @@ kind: DruidCluster
 metadata:
   name: druid-nytaxidata
 spec:
-  version: 0.22.1
+  version: 0.23.0-stackable0.1.0
   zookeeperConfigMapName: simple-druid-znode # <1>
   metadataStorageDatabase: # <2>
     dbType: postgresql
@@ -402,13 +422,26 @@ spec:
     port: 5432
     user: druid
     password: druid
-  s3:
-    endpoint: http://minio:9000
-    credentialsSecret: druid-s3-credentials # <3>
+  ingestion:
+    s3connection:
+      inline:
+        host: http://minio
+        port: 9000
+        accessStyle: Path
+        credentials:
+          secretClass: druid-s3-credentials # <3>
   deepStorage:
-    storageType: s3
-    bucket: nytaxidata
-    baseKey: storage
+    s3:
+      bucket:
+        inline:
+          bucketName: nytaxidata
+      connection:
+        inline:
+          host: http://minio
+          port: 9000
+          accessStyle: Path
+          credentials:
+            secretClass: druid-s3-credentials # <3>
   brokers:
     configOverrides:
       runtime.properties:
@@ -485,7 +518,7 @@ kubectl port-forward svc/druid-nytaxidata-router 8888
 
 Keep this command running to continue accessing the Router port locally.
 
-The UI should now be reachable at http://localhost:8888 and should look like the screenshot below. Start with the Load Data option:
+The UI should now be reachable at http://localhost:8888 and should look like the screenshot below. Start with the "Load Data" and "New Spec" options:
 
 image::end-to-end_data_pipeline_example/druid-main.png[Main Screen]
 
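Not part of the diff, but while the port-forward above is running, the Router can be smoke-tested from a second terminal before opening the UI; /status is Druid's standard status endpoint:

# Should return a JSON document including the Druid version once the Router is up.
curl -s http://localhost:8888/status
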
@@ -568,7 +601,7 @@ stackablectl operator install superset
 ====
 [source,bash]
 ----
-helm install druid-operator stackable-stable/superset-operator
+helm install superset-operator stackable-stable/superset-operator
 ----
 ====
 
@@ -614,7 +647,7 @@ kind: SupersetCluster
 metadata:
   name: simple-superset
 spec:
-  version: 1.4.1 # <1>
+  version: 1.5.1-stackable0.1.0 # <1>
   statsdExporterVersion: v0.22.4
   credentialsSecret: simple-superset-credentials # <2>
   nodes:
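
Once all of the updated manifests are applied, the clusters created throughout the tutorial can be checked in one go. This is a sketch rather than part of the commit; the plural CRD names are assumed from the release 22.06 operators:

# List every Stackable cluster resource from the tutorial (CRD plurals assumed).
kubectl get zookeeperclusters,kafkaclusters,nificlusters,druidclusters,supersetclusters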
