To generate a starter specification, run -

.. code-block::

   ads opctl init --help

The resource type is a mandatory attribute. The currently supported resource types are ``dataflow``, ``deployment``, ``job``, and ``pipeline``.

For instance, to generate a starter specification for a Data Science job, run -

.. code-block::
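
   # The original example is not shown in this excerpt; the --resource-type flag below is an assumption.
   # Check `ads opctl init --help` for the exact option names.
   ads opctl init --resource-type job
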
The resulting YAML will be printed in the console. By default the ``python`` runtime is used.

**Supported runtimes**

- For a ``job`` - ``container``, ``gitPython``, ``notebook``, ``python`` and ``script``.
- For a ``pipeline`` - ``container``, ``gitPython``, ``notebook``, ``python`` and ``script``.
- For a ``dataflow`` - ``dataFlow`` and ``dataFlowNotebook``.
- For a ``deployment`` - ``conda`` and ``container``.

If you want to specify a particular runtime, use -
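The exact command is not included in this excerpt; the sketch below assumes the runtime is selected with a ``--runtime-type`` flag (and the resource type with ``--resource-type``), so confirm the option names with ``ads opctl init --help``:

.. code-block::

   ads opctl init --resource-type job --runtime-type container
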
Use the ``--output`` attribute to save the result in a YAML file.
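For example, building on the sketch above (``--output`` comes from the sentence above; ``--resource-type`` remains an assumption and the file name is arbitrary):

.. code-block::

   ads opctl init --resource-type job --output job_spec.yaml
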
File: ``docs/source/user_guide/operators/common/run.rst``
The first step is to generate starter kit configurations that simplify the execution of the operator.

.. code-block:: bash

   ads operator init --help

.. figure:: figures/operator_init.png
   :align: center

.. admonition:: Important
   :class: warning

   If the ``--merge-config`` flag is set to ``true``, the ``<operator-type>.yaml`` file will be merged with the backend configuration, which contains pre-populated infrastructure and runtime sections. In this case you don't need to provide the backend information separately.

.. code-block:: bash

   ads operator run -f <operator-type>.yaml

Alternatively, the ``ads opctl run`` command can be used:

.. code-block:: bash

   ads opctl run -f <operator-type>.yaml

The operator will run in the chosen environment without requiring additional modifications.

Different Ways To Run Operator
------------------------------

The operator can be run in two different ways:

.. code-block:: bash

   ads operator run -f <operator-config>.yaml

Or alternatively:

.. code-block:: bash

   ads opctl run -f <operator-config>.yaml

Although the two commands above look equivalent, the ``ads operator run`` command is more flexible. A few restrictions apply when running the operator with the ``ads opctl run`` command:

- The ``<operator-config>.yaml`` file must contain all the information necessary to run the operator. This means the ``<operator-config>.yaml`` file must contain a ``runtime`` section describing the backend configuration for the operator.
- If the ``<operator-config>.yaml`` file does not contain a ``runtime`` section, the ``ads opctl run`` command can still be used in a restricted mode with the ``-b`` option, which lets you specify the backend to run the operator on (see the example below). The ``-b`` option can be used with the following backends: ``local``, ``dataflow``, ``job``. However, you will not be able to use the ``-b`` option with the local ``container`` backend or the Data Science Jobs ``container`` backend.
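For example, when the config has no ``runtime`` section, the backend can be selected explicitly (a usage sketch based on the short-form commands shown later in this guide):

.. code-block:: bash

   ads opctl run -f <operator-config>.yaml -b job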
Run Operator Locally
--------------------

To run the operator locally, follow these steps:

1. Create and activate a new conda environment named ``<operator-type>``.
2. Install all the required libraries listed in the ``environment.yaml`` file generated by the ``ads operator init --type <operator-type>`` command (see the sketch after these steps).
3. Review the ``<operator-type>.yaml`` file generated by the ``ads operator init`` command and make the necessary adjustments to the input and output file locations. Note that the ``<operator-type>.yaml`` file will not be generated if the ``--merge-config`` flag is set to ``true``.
4. Verify the operator's configuration using the following command:

   .. code-block:: bash

      ads operator verify -f <operator-config>.yaml

5. To run the operator within the ``<operator-type>`` conda environment, use this command:

   .. code-block:: bash

      ads operator run -f <operator-type>.yaml -b local
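A minimal sketch of steps 1 and 2, assuming the generated ``environment.yaml`` is a standard conda environment specification (adjust the commands if the file turns out to be a plain pip requirements list):

.. code-block:: bash

   # create the conda environment from the generated file, naming it after the operator
   conda env create -f environment.yaml --name <operator-type>
   # activate it before verifying or running the operator
   conda activate <operator-type>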
The alternative way to run the operator is to use the ``ads opctl run`` command:

.. code-block:: bash

   ads opctl run -f <operator-type>.yaml -b local

See the `Different Ways To Run Operator <#different-ways-to-run-operator>`_ section for more details.
Within Container
~~~~~~~~~~~~~~~~

.. code-block:: bash

   ads operator run -f <operator-type>.yaml -b backend_operator_local_container_config.yaml

Or within a short command:

.. code-block:: bash

   ads operator run -f <operator-type>.yaml -b local.container

The alternative way to run the operator is to use the ``ads opctl run`` command. However, in this case the runtime information needs to be merged into the operator's config. See the `Different Ways To Run Operator <#different-ways-to-run-operator>`_ section for more details.

.. code-block:: bash

   ads opctl run -f <operator-type>.yaml

If the backend runtime information is not merged into the operator's config, there is no way to run the operator with the ``ads opctl run`` command using the container runtime; use the ``ads operator run`` command instead.

Run Operator In Data Science Job
--------------------------------

To publish ``<operator-type>:<operator-version>`` to OCR, use this command:
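The publish command itself is not included in this excerpt. As a rough sketch, the operator image could be pushed to the Oracle Container Registry with plain Docker (the operator CLI may provide its own publish command; the registry path mirrors the ``image`` value referenced below):

.. code-block:: bash

   # assumes you are already logged in to OCIR and the image is tagged for your registry
   docker push iad.ocir.io/<tenancy>/<operator-type>:<operator-version>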
After publishing the container to OCR, you can use it within the Data Science jobs service. Check the ``backend_job_container_config.yaml`` configuration file built while initializing the starter configs for the operator. It should contain pre-populated infrastructure and runtime sections, and the runtime section should have an image property, such as ``image: iad.ocir.io/<tenancy>/<operator-type>:<operator-version>``.

3. Adjust the ``<operator-type>.yaml`` configuration with the proper input/output folders. When running the operator in a Data Science job, it won't have access to local folders, so the input data and output folders should be placed in an Object Storage bucket. Open the ``<operator-type>.yaml`` file and adjust the data path fields.
4. Run the operator on Data Science jobs using this command:

   .. code-block:: bash

      ads operator run -f <operator-type>.yaml -b backend_job_container_config.yaml

   Or within a short command:

   .. code-block:: bash

      ads operator run -f <operator-type>.yaml -b job.container

   In this case the backend config will be built on the fly. However, the recommended way is to use explicit configurations for both the operator and the backend.

The alternative way to run the operator is to use the ``ads opctl run`` command. However, in this case the runtime information needs to be merged into the operator's config. See the `Different Ways To Run Operator <#different-ways-to-run-operator>`_ section for more details.

.. code-block:: bash

   ads opctl run -f <operator-type>.yaml

If the backend runtime information is not merged into the operator's config, there is no way to run the operator with the ``ads opctl run`` command using the container runtime; use the ``ads operator run`` command instead.

You can run the operator with the ``--dry-run`` attribute to check the final configs that will be used to run the operator on the service. This will not run the operator; it only prints the resolved configs.
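For example, combining the short-form command above with the ``--dry-run`` attribute (a usage sketch):

.. code-block:: bash

   ads operator run -f <operator-type>.yaml -b job.container --dry-run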
Running the operator will return a command to help you monitor the job's logs:

.. code-block:: bash

   ads opctl watch <OCID>

Run With Conda Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~
For more details on configuring the CLI, refer to the Explore & Configure documentation.

.. code-block:: bash

   ads operator run -f <operator-type>.yaml -b backend_job_python_config.yaml

Or within a short command:

.. code-block:: bash

   ads operator run -f <operator-type>.yaml -b job

In this case the backend config will be built on the fly. However, the recommended way is to use explicit configurations for both the operator and the backend.

The alternative way to run the operator is to use the ``ads opctl run`` command. However, in this case the runtime information needs to be merged into the operator's config. See the `Different Ways To Run Operator <#different-ways-to-run-operator>`_ section for more details.

.. code-block:: bash

   ads opctl run -f <operator-type>.yaml

Or if the backend runtime information is not merged into the operator's config:

.. code-block:: bash

   ads opctl run -f <operator-type>.yaml -b job

6. Monitor the logs using the ``ads opctl watch`` command::
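
      # the same monitoring command shown earlier in this guide
      ads opctl watch <OCID>
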
After publishing the conda environment to Object Storage, you can use it within the Data Flow service.

.. code-block:: bash

   ads operator run -f <operator-type>.yaml -b backend_dataflow_dataflow_config.yaml

Or within a short command:

.. code-block:: bash

   ads operator run -f <operator-type>.yaml -b dataflow

In this case the backend config will be built on the fly. However, the recommended way is to use explicit configurations for both the operator and the backend.

The alternative way to run the operator is to use the ``ads opctl run`` command. However, in this case the runtime information needs to be merged into the operator's config. See the `Different Ways To Run Operator <#different-ways-to-run-operator>`_ section for more details.

.. code-block:: bash

   ads opctl run -f <operator-type>.yaml

Or if the backend runtime information is not merged into the operator's config:

.. code-block:: bash

   ads opctl run -f <operator-type>.yaml -b dataflow

5. Monitor the logs using the ``ads opctl watch`` command::
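
      # the same monitoring command shown earlier in this guide
      ads opctl watch <OCID>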