modules/ROOT/pages/getting_started.adoc (22 additions, 19 deletions)
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
----
== Installing Stackable
=== Install stackablectl
Install the Stackable command line utility xref:stackablectl::index.adoc[stackablectl] by following the installation steps for your platform on the xref:stackablectl::installation.adoc[installation] page.
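On Linux, for example, the installation typically boils down to downloading the release binary and putting it on your `PATH`. A sketch, in which the download URL and asset name are assumptions; the installation page linked above has the authoritative steps for each platform:

```shell
# Download a stackablectl release binary (URL and asset name are assumptions;
# see the installation page for the exact instructions for your platform)
curl -L -o stackablectl \
  https://github.com/stackabletech/stackablectl/releases/latest/download/stackablectl-x86_64-unknown-linux-gnu
chmod +x stackablectl
sudo mv stackablectl /usr/local/bin/

# Confirm the binary is installed and on the PATH
stackablectl --version
```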
=== Installing Stackable Operators
The Stackable operators are components that translate the service definitions deployed via Kubernetes into running services on the worker nodes. They can be installed on any node that has access to the Kubernetes control plane. In this example we will install them on the controller node.
Stackable operators are installed using stackablectl. Run the following commands to install the latest versions of the ZooKeeper, Kafka and NiFi operators.
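A sketch of what this looks like with stackablectl's `operator` subcommand (without a version flag the latest released operator versions are installed; the exact flags may differ between stackablectl releases, so check `stackablectl operator install --help`):

```shell
# Install the latest released versions of the three operators
stackablectl operator install zookeeper kafka nifi

# List the operators that are now installed in the cluster
stackablectl operator installed
```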
Apache Spark is a multi-language engine for executing data engineering, data science, and machine learning on single-node machines or clusters.
xref:spark-k8s::index.adoc[Read more]
++++
</div>
<h3>Apache Superset</h3>
++++
Apache Superset is a modern data exploration and visualization platform.
The diagram above shows three examples of how the objects can be structured.
// Option 1
In option 1 all objects are separate from each other. This provides maximum reusability, because the same connection or bucket object can be referenced by multiple resources. It also allows for separation of concerns across team members: cluster administrators can define S3 connection objects that developers reference in their applications.
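As a sketch of option 1, the connection and bucket could be defined as separate resources and linked by name (the names and endpoint below are placeholders; the field layout follows the xref:reference:s3.adoc[S3 resources reference]):

```yaml
---
apiVersion: s3.stackable.tech/v1alpha1
kind: S3Connection
metadata:
  name: my-s3-connection   # defined once, e.g. by a cluster administrator
spec:
  host: s3.example.com     # placeholder endpoint
  port: 9000
---
apiVersion: s3.stackable.tech/v1alpha1
kind: S3Bucket
metadata:
  name: my-bucket
spec:
  bucketName: application-data
  connection:
    reference: my-s3-connection   # must be in the same namespace
```

A product cluster can then reference `my-bucket` by name, and several clusters can share the same connection object.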
// Option 2
In option 2 the bucket is inlined in the cluster definition. This makes sense if the bucket serves a dedicated purpose and is only used by this one cluster instance, in this single product.
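A sketch of option 2, with the bucket inlined in a product cluster while the connection is still referenced (the product kind and the exact spec path of the bucket are hypothetical and differ per product; only the `inline`/`reference` shape follows the S3 resources reference):

```yaml
# Hypothetical product cluster -- the field under which a product
# accepts its S3 bucket varies from product to product
apiVersion: example.stackable.tech/v1alpha1
kind: SomeProductCluster
metadata:
  name: my-product
spec:
  bucket:
    inline:                 # bucket defined directly in the cluster spec
      bucketName: dedicated-data
      connection:
        reference: my-s3-connection   # the connection can still be shared
```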
// Option 3
The inline definition is variant 3 in the figure above.
modules/reference/pages/s3.adoc (3 additions, 3 deletions)
= S3 resources
This page contains the reference information for the S3Bucket and S3Connection resources. For guidance on usage, see xref:concepts:s3.adoc[S3 resources concept].
== S3Bucket
A bucket consists of a bucket name and the connection to the object store where it is located.
* `name`: `String`, the name of the Bucket.
* `connection`: can either be `inline` or `reference`.
** `inline`: See the properties below for <<S3Connection>>.
** `reference`: `String`, the name of the referenced S3Connection resource, which must be in the same namespace as the S3Bucket resource.
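Putting the properties above together, a minimal S3Bucket using a referenced connection might look like this (names are placeholders):

```yaml
apiVersion: s3.stackable.tech/v1alpha1
kind: S3Bucket
metadata:
  name: my-bucket
spec:
  bucketName: my-example-bucket     # the name of the bucket in the object store
  connection:
    reference: my-s3-connection     # an S3Connection in the same namespace
```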