
Conversation

@aazam-gh
Contributor

What

It's useful for Twilio users to keep track of their workspaces. This connector syncs workspace data from a user's Twilio account using the workspace ID.

How

This connector adds the following stream

  • workspaces

Recommended reading order

  1. twilio_taskrouter.yaml
  2. spec.yaml

🚨 User Impact 🚨

No changes to existing code

Pre-merge Checklist

New Connector

Community member or Airbyter

  • Community member? Grant edit access to maintainers (instructions)
  • Secrets in the connector's spec are annotated with airbyte_secret
  • Unit & integration tests added and passing. Community members, please provide proof of success locally e.g: screenshot or copy-paste unit, integration, and acceptance test output. To run acceptance tests for a Python connector, follow instructions in the README. For java connectors run ./gradlew :airbyte-integrations:connectors:<name>:integrationTest.
  • Code reviews completed
  • Documentation updated
    • Connector's README.md
    • Connector's bootstrap.md. See description and examples
    • docs/integrations/<source or destination>/<name>.md including changelog. See changelog example
    • docs/integrations/README.md
    • airbyte-integrations/builds.md
  • PR name follows PR naming conventions

Airbyter

If this is a community PR, the Airbyte engineer reviewing this PR is responsible for the below items.

  • Create a non-forked branch based on this PR and test the below items on it
  • Build is successful
  • If new credentials are required for use in CI, add them to GSM. Instructions.
  • /test connector=connectors/<name> command is passing
  • New Connector version released on Dockerhub by running the /publish command described here
  • After the connector is published, connector added to connector index as described here
  • Seed specs have been re-generated by building the platform and committing the changes to the seed spec files, as described here

Tests

Unit

None

Integration

[integration test output screenshot]

Acceptance

[acceptance test output screenshot]

@CLAassistant

CLAassistant commented Oct 31, 2022

CLA assistant check
All committers have signed the CLA.

@github-actions github-actions bot added area/connectors Connector related issues area/documentation Improvements or additions to documentation labels Oct 31, 2022
@sajarin sajarin added the bounty-XL Maintainer program: claimable extra large bounty PR label Oct 31, 2022
@marcosmarxm marcosmarxm changed the title New Source: Twilio Taskrouter API 🎉 New Source: Twilio Taskrouter API [low-code cdk] Oct 31, 2022
@YiyangLi
Contributor

@Alcadeus0 Thanks for your contribution. I'll help review it and will get back to you by EOD.

@aazam-gh
Contributor Author

aazam-gh commented Nov 1, 2022

Sure!

Contributor

@YiyangLi YiyangLi left a comment

I think it's a good start. The source connector only returns a single workspace. I would encourage you to implement more streams under TaskRouter. Thanks.

record_selector:
  $ref: "*ref(definitions.selector)"
paginator:
  type: NoPagination
Contributor

I understand that only one workspace is fetched, so there's no need to paginate. But if a sid is offered, you can fetch other resources under the workspace. Can you implement one that requires a paginator?

And Twilio's paginator is applied across all APIs, including TaskRouter. Can you leverage code in the existing source-twilio?

Contributor Author

Do you want me to create another workspace stream and have the user enter the resource name? I ask because certain resources in the TaskRouter API docs advise against using the Page query parameter.

Contributor

A paginator would be needed when the response is too long or the resource set is huge. Consequently, the requester has to include the page number or another form of cursor to get the remaining parts.

For workspaces, the API "will return the first 50 Workspaces. Supply a PageSize parameter to fetch more than 50 Workspaces." Maybe you can't create multiple workspaces, but I guess the resource set is huge on other streams, for example, activities.

Contributor Author

Sure, that works. So for the workspaces stream I'll have to add another optional parameter for the page size. Can you point me to a CDK doc on how to implement it? That would be great.

Contributor

"will return the first 50 Workspaces. Supply a PageSize parameter to fetch more than 50 Workspaces."

It comes from Twilio's API doc; you may supply the parameter to get more than 50, and you may pick or implement a paginator in order to get the other pages. For example, if the PageSize is 100 and there are 500 entities, there will be 5 pages in total, and Airbyte needs to send 5 requests to complete a full import.

My suggestion is to have a workspaces stream that fetches all workspaces, and add another stream that depends on the required field workspaceId.

Contributor

This is the doc for the paginator

Contributor Author

I'm having some trouble implementing the paginator. I've set it up so that the user can enter a page_size input, and the paginator reads it as page_size: "{{ config['number'] }}". But how do I convert it from a string into an integer?

Contributor

number is not part of spec.yaml. When you add it to the spec, you could set the type of the attribute to integer.

My understanding of page_size is that it defines the number of entities in a page. You could set a constant number, for example, 50. Even if a customer is allowed to use a large number, say 65535, the server side at Twilio may cap it at another number, say 200. And if there are thousands of entities, you need to turn to the next page by including the page number.

https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/source-greenhouse/source_greenhouse/greenhouse.yaml

source-greenhouse is a good example: Greenhouse includes the page number in the headers of a response, following the industry standard RFC 8288. Twilio instead envelopes the entities and sets page, page_size, and even the link to the next page, called next_page_uri, in the response body. Something like the following:

{ "page": 0, "page_size": 50, "uri": "\/2010-04-01\/Accounts\/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\/Calls.json", "first_page_uri": "\/2010-04-01\/Accounts\/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\/Calls.json?Page=0&PageSize=50", "previous_page_uri": null, "next_page_uri": "\/2010-04-01\/Accounts\/ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\/Calls.json?Page=1&PageSize=50&AfterSid=CA228399228abecca920de212121", "calls": [ 

To test your pagination strategy, you don't need to create excessive resources; you could simply set page_size to a small number. For example, if you have 6 workers and the page size is 2, you need to paginate 3 times. After you run the read command, Airbyte will tell you how many workers were read.
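For reference, a minimal sketch of such a spec entry, assuming the usual connectionSpecification layout; the page_size name, title, description, and default below are illustrative, not taken from this PR:

# Hypothetical spec.yaml excerpt: an optional integer page size with a default.
connectionSpecification:
  type: object
  properties:
    page_size:
      type: integer
      title: "Page Size"
      description: "Number of records per page; Twilio may cap large values server-side."
      default: 50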

Contributor Author

> source-greenhouse is a good example: Greenhouse includes the page number in the headers of a response, following the industry standard RFC 8288. Twilio instead envelopes the entities and sets page, page_size, and even the link to the next page, called next_page_uri, in the response body.

Yes, I was confused at first when the response header did not indicate the page_size and number, but as you explained, the enveloping makes sense. I'll add the paginator to the workers stream with the example values you provided and see how it goes.

Contributor Author

@YiyangLi could you take a look at the changes and see if I need anything else?

Contributor

@YiyangLi YiyangLi left a comment

It's good to see the progression. Can you share how to create


check:
  stream_names:
    - "allworkspaces"
Contributor

I think you only need one stream to verify the check

Contributor Author

Sure, I'll remove the other two streams.

Contributor

It still requires fetching all 3 streams to verify the connectivity. Can you simplify it? Thanks.

Contributor Author

Yeah, I seem to have made the new changes only in my topic branch. I'll push the changes to this branch.

path: "/v1/Workspaces"
primary_key: "sid"
retriever:
$ref: "*ref(definitions.base_stream.retriever)"
Contributor

Do you need the paginator for allworkspaces? I understand that in your test account there's only one workspace, but for others there might be more than 50, and a paginator is required. And to avoid duplicated code, you can have an incremental_stream that extends base_stream and move the paginator from workers to the incremental stream. Then workers and allworkspaces extend incremental_stream, and incremental_stream extends base_stream, as sketched below.
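A minimal sketch of that hierarchy, reusing the *ref(...) syntax this YAML already uses; the definition names are illustrative:

# Hypothetical sketch of the suggested stream hierarchy.
definitions:
  incremental_stream:
    $ref: "*ref(definitions.base_stream)"
    retriever:
      $ref: "*ref(definitions.base_stream.retriever)"
      paginator:
        $ref: "*ref(definitions.paginator)"  # the shared paginator lives here
  workers_stream:
    $ref: "*ref(definitions.incremental_stream)"
  allworkspaces_stream:
    $ref: "*ref(definitions.incremental_stream)"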

Contributor Author

Is the incremental stream implemented using stream slicers? If so, the date-updated property of a workspace can be used.


1. Navigate to the Airbyte Open Source dashboard.
2. Set the name for your source.
3. Enter your `account_sid`.
Contributor

Can you provide more Twilio links to tell users where to find the account_sid, auth_token, and sid? Users don't see "sid" in the UI; the label text is the description, which is "Workspace ID".

@YiyangLi
Contributor

YiyangLi commented Nov 4, 2022

I just realized that you created the PR based on the master branch in the forked repository. Can you use a topic branch?

@YiyangLi
Contributor

YiyangLi commented Nov 4, 2022

I ran the command to fetch data, including workers, and the read never stops. I guess it's a problem with the pagination: you need to tell Airbyte to stop when it reaches the last page, which happens when the next_page_url is null.

Take the following payload: it means the page size is 2 and the next page is page 1 (0-indexed):

{ "workers": [ ...some workers here, skip it. ], "meta": { "page": 0, "page_size": 2, "first_page_url": "https://taskrouter.twilio.com/v1/Workspaces/WSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Workers?PageSize=2&Page=0", "previous_page_url": null, "url": "https://taskrouter.twilio.com/v1/Workspaces/WSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Workers?PageSize=2&Page=0", "next_page_url": "https://taskrouter.twilio.com/v1/Workspaces/WSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Workers?PageSize=2&Page=1&PageToken=PAWK1WSXXXXXXXXXXXXXXXX", "key": "workers" } } 

while for the final response, the next_page_url is null, meaning it has reached the end.

{ "workers": [ ...some workers here, skip it. ], "meta": { "page": 0, "page_size": 2, "first_page_url": "https://taskrouter.twilio.com/v1/Workspaces/WSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Workers?PageSize=2&Page=0", "previous_page_url": null, "url": "https://taskrouter.twilio.com/v1/Workspaces/WSXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/Workers?PageSize=2&Page=0", "next_page_url": null, "key": "workers" } } 

You could reproduce the problem by setting a small page_size.
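In the declarative CDK, that null sentinel is typically expressed as a stop_condition on a cursor-based pagination strategy. A rough sketch, assuming the response shape shown above:

# Hypothetical sketch: stop paginating once next_page_url comes back null.
pagination_strategy:
  type: "CursorPagination"
  cursor_value: "{{ response['meta']['next_page_url'] }}"
  stop_condition: "{{ response['meta']['next_page_url'] is none }}"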

@aazam-gh
Contributor Author

aazam-gh commented Nov 5, 2022

> I just realized that you created the PR based on the master branch in the forked repository. Can you use a topic branch?

I created a new branch, twilio-taskrouter, but is there a way to edit the pull request to point to it instead of the master branch? Or should I create a new PR for it?

@aazam-gh
Contributor Author

aazam-gh commented Nov 5, 2022

I fixed the endless read error when paginating by implementing CursorPagination: it has a stop-condition parameter, and a cursor value of null ends the read when there are no more records.

@YiyangLi
Contributor

YiyangLi commented Nov 6, 2022

> I just realized that you created the PR based on the master branch in the forked repository. Can you use a topic branch?

> I created a new branch, twilio-taskrouter, but is there a way to edit the pull request to point to it instead of the master branch? Or should I create a new PR for it?

It's okay, just leave it as is. In order to qualify for the Hacktoberfest bounty, you needed to create the PR before Nov 4th.

Please make sure you share edit permissions with us: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork. Either I or the CI will push commits in order to get your code published.

@YiyangLi
Contributor

YiyangLi commented Nov 6, 2022

> I fixed the endless read error when paginating by implementing CursorPagination: it has a stop-condition parameter, and a cursor value of null ends the read when there are no more records.

Not sure if you pushed your local commits correctly; the read in my local environment never ends. The version I am running is 2c6a104.

You could try it by the following commands.

Build

docker build . -t airbyte/source-twilio-taskrouter:dev 

Run

docker run --rm -v $(pwd)/secrets:/secrets -v $(pwd)/integration_tests:/integration_tests airbyte/source-twilio-taskrouter:dev read --config /secrets/config.json --catalog /integration_tests/configured_catalog.json 

In my Twilio account, I have 34 workers. I didn't set the page_size in secrets/config.json. My config is similar to the sample_config.json you provided.

extractor:
  field_pointer: ["workers"]
paginator:
  type: "DefaultPaginator"
Contributor

I don't see the CursorPagination you refer to; did you forget to push your commits?

@YiyangLi
Contributor

YiyangLi commented Nov 6, 2022

I got the error.

{"type": "LOG", "log": {"level": "FATAL", "message": "\"The requested stream workspaces was not found in the source. Available streams: dict_keys(['allworkspaces']) 
field_pointer: ["workers"]

streams:
  - "*ref(definitions.allworkspaces_stream)"
Contributor

You need to list the other streams if they are configured in integration_tests/configured_catalog.json.

Contributor Author

Yeah, I listed the other streams.

inject_into: "request_parameter"
field_name: "page_size"
pagination_strategy:
  type: "CursorPagination"
Contributor

The paginator doesn't turn to the next page. In my test account, I have 57 workers. I would expect the connector to first fetch 50 workers (50 is the default page size), then increase the page to 1 and use page=1 to get the remaining 7. But only the first 50 are read.

{"type": "LOG", "log": {"level": "INFO", "message": "Read 50 records from workers stream"}} {"type": "LOG", "log": {"level": "INFO", "message": "Finished syncing workers"}} {"type": "LOG", "log": {"level": "INFO", "message": "SourceTwilioTaskrouter runtimes:\nSyncing stream allworkspaces 0:00:00.364470\nSyncing stream workers 0:00:00.780400\nSyncing stream workspaces 0:00:00.384402"}} {"type": "LOG", "log": {"level": "INFO", "message": "Finished syncing SourceTwilioTaskrouter"}} 
Contributor Author

I've made sure the edit access for maintainers is checked; I just need to fix the paginator. What could be the reason for it being unable to read the remaining records? I'll try testing it out by creating more workers on my test account.

Contributor Author

Can you run the test again and see if it works?

Contributor

@YiyangLi YiyangLi Nov 6, 2022

> What could be the reason for it being unable to read the remaining records?

The reason is that you need to parse response.body.meta, get the page number from the next_page_url, and use it to send the request again, until next_page_url is null.

request_url = "base_url/v1/workers";
do {
  response = request.get(request_url);
  request_url = response.body.meta.next_page_url;
} while (request_url != null)

The pseudo-code looks like the above.
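Translated to the declarative YAML, the loop above roughly corresponds to a paginator along these lines. This is a sketch only, reusing this connector's *ref(...) conventions; the exact option names should be checked against the CDK version in use:

# Hypothetical sketch: follow meta.next_page_url until it is null.
paginator:
  type: "DefaultPaginator"
  url_base: "*ref(definitions.requester.url_base)"
  page_size_option:
    inject_into: "request_parameter"
    field_name: "PageSize"
  page_token_option:
    inject_into: "path"  # the cursor is a full URL, so inject it as the request path
  pagination_strategy:
    type: "CursorPagination"
    page_size: 50
    cursor_value: "{{ response['meta']['next_page_url'] }}"
    stop_condition: "{{ response['meta']['next_page_url'] is none }}"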

Contributor Author

@aazam-gh aazam-gh Nov 6, 2022

Where would I need to enter the code? In which file specifically? Is it done by defining custom components?

Contributor

Since a new paginator is introduced, please run the unit tests, integration tests, and acceptance tests in your local environment and provide the logs again. The ones in the PR description are outdated. Thanks.

https://docs.airbyte.com/connector-development/testing-connectors/source-acceptance-tests-reference

Contributor Author

I tried with 5 workers and a page size of 2 and obtained the desired output:
[test output screenshot]

Contributor Author

Here is the log of the new set of tests:
[test log screenshot]

Thanks for all your help, really appreciate it

Contributor

You are able to fetch 5 workers because the page_size of 2 is not passed to the request parameters; the property in config.json has no effect.

I still don't get 57 workers. When I use a different page_size, the result is the same, meaning only the first page is fetched.

Contributor

> Where would I need to enter the code? In which file specifically? Is it done by defining custom components?

I just offered you pseudo-code to explain the algorithm behind it; you need to set up the paginator so that it works the way the algorithm describes. According to your YAML file, your paginator doesn't parse the response at all. Check greenhouse.yaml for an example.
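From memory, Greenhouse's strategy reads the parsed RFC 8288 Link header rather than the response body, approximately like this (recalled, not verified against the current file):

# Approximate shape of greenhouse.yaml's strategy: the cursor comes from the
# parsed Link response header, not from the response body.
pagination_strategy:
  type: "CursorPagination"
  cursor_value: "{{ headers['link']['next']['url'] }}"
  stop_condition: "{{ 'next' not in headers['link'] }}"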

field_name: "page_size"
pagination_strategy:
  type: "CursorPagination"
  cursor_value: "{{ headers['url']['next'] }}"
Contributor

You try to tell the paginator to get the cursor value from the headers. Are you sure the headers contain the value you want?

The following are the response headers from an example GET /Workers:

Date: Mon, 07 Nov 2022 04:10:22 GMT
Content-Type: application/json
Content-Length: 4151
Connection: keep-alive
x-rate-limit-remaining: 50
x-rate-limit-limit: 50, 50;window=1
Twilio-Concurrent-Requests: 1
x-rate-limit-config: Worker-List
Twilio-Request-Id: RQ5982d0470f1e214bfd748a8eb699f817
Twilio-Request-Duration: 0.021
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Accept, Authorization, Content-Type, If-Match, If-Modified-Since, If-None-Match, If-Unmodified-Since, Idempotency-Key
Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS
Access-Control-Expose-Headers: ETag
Access-Control-Allow-Credentials: true
X-Powered-By: AT-5000
X-Shenanigans: none
X-Home-Region: us1
X-API-Domain: taskrouter.twilio.com
Strict-Transport-Security: max-age=31536000
Contributor Author

Okay, I'll figure out the appropriate cursor value, since the response headers don't carry it.

Contributor

The cursor value doesn't come from the header. It's part of the response body: body["meta"]["next_page_uri"].

Contributor Author

I had tried that, but it says that body is not defined.

Contributor Author

I'm trying with response['meta']['next_page_url'], but it shows a JSON decode error at line 1, col 1 (char 0).

Contributor

There's a case where next_page_url is null, which indicates you have reached the last page. Maybe that's the reason for the JSON deserializer error.

Contributor Author

When I check the API, it does show the next page URL. Could it maybe be related to the page_token being injected into the path?
[API response screenshot]

Contributor Author

Or maybe it's because I am using a record extractor to filter out the meta JSON response?

Contributor

> When I check the API, it does show the next page URL. Could it maybe be related to the page_token being injected into the path?

The next_page_url is the URL for the next page; requesting it gives you another next_page_url in the meta, and so on until the value is null. This is the pseudo-code I provided above. The right pagination strategy or paginator in the YAML works like the pseudo-code if it's configured correctly. You can also manually change the URL in a tool like Postman to understand how it works.

request_url = "base_url/v1/workers";
do {
  response = request.get(request_url);
  request_url = response.body.meta.next_page_url;
} while (request_url != null)
Contributor Author

@aazam-gh aazam-gh Nov 15, 2022

I made some adjustments and now it reads all the records, with the page size being 50.

[Screenshot from 2022-11-15 12-02-46]

Contributor

@YiyangLi YiyangLi left a comment

[ ] Pass the page_size to the request parameters correctly, since you put it under the spec.json.
[ ] Parse the cursor value correctly from the response, and use it for the next request if it's not null.

@marcosmarxm
Contributor

marcosmarxm commented Nov 18, 2022

/test connector=connectors/source-twilio-taskrouter

🕑 connectors/source-twilio-taskrouter https://github.com/airbytehq/airbyte/actions/runs/3498970594
✅ connectors/source-twilio-taskrouter https://github.com/airbytehq/airbyte/actions/runs/3498970594
Python tests coverage:

Name                                                Stmts   Miss  Cover   Missing
----------------------------------------------------------------------------------
source_acceptance_test/base.py                         12      4    67%   16-19
source_acceptance_test/config.py                      139      5    96%   87, 93, 235, 239-240
source_acceptance_test/conftest.py                    196     92    53%   35, 41-43, 48, 54, 60, 66, 72-74, 93, 98-100, 106-108, 114-115, 120-121, 126, 132, 141-150, 156-161, 176, 200, 231, 237, 243-248, 256-261, 269-282, 287-293, 300-311, 318-334
source_acceptance_test/plugin.py                       69     25    64%   22-23, 31, 36, 120-140, 144-148
source_acceptance_test/tests/test_core.py             398    111    72%   53, 58, 87-95, 100-107, 111-112, 116-117, 299, 337-354, 363-371, 375-380, 386, 419-424, 462-469, 512-514, 517, 582-590, 602-605, 610, 666-667, 673, 676, 712-722, 735-760
source_acceptance_test/tests/test_incremental.py      158     14    91%   52-59, 64-77, 240
source_acceptance_test/utils/asserts.py                37      2    95%   57-58
source_acceptance_test/utils/common.py                 94     10    89%   16-17, 32-38, 72, 75
source_acceptance_test/utils/compare.py                62     23    63%   21-51, 68, 97-99
source_acceptance_test/utils/connector_runner.py      112     50    55%   23-26, 32, 36, 39-68, 71-73, 76-78, 81-83, 86-88, 91-93, 96-114, 148-150
source_acceptance_test/utils/json_schema_helper.py    107     13    88%   30-31, 38, 41, 65-68, 96, 120, 192-194
----------------------------------------------------------------------------------
TOTAL                                                1563    349    78%

Build Passed

Test summary info:

=========================== short test summary info ============================
SKIPPED [1] ../usr/local/lib/python3.9/site-packages/source_acceptance_test/plugin.py:63: Skipping TestIncremental.test_two_sequential_reads: not found in the config.
SKIPPED [1] ../usr/local/lib/python3.9/site-packages/source_acceptance_test/tests/test_core.py:88: The previous connector image could not be retrieved.
SKIPPED [1] ../usr/local/lib/python3.9/site-packages/source_acceptance_test/tests/test_core.py:364: The previous connector image could not be retrieved.
================= 26 passed, 3 skipped, 29 warnings in 25.33s ==================
@marcosmarxm
Contributor

I added a SubstreamSlicer to read all workspaces and read all other endpoints with the stream slicer. I also made pagination work; it now reads the 57 records in the workers stream.
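For context, a rough sketch of what such a slicer looks like in the declarative YAML of the time; the definition names and stream_slice_field are illustrative, not copied from the merged file:

# Hypothetical sketch: slice child endpoints (e.g. workers) by the parent workspace's sid.
stream_slicer:
  type: "SubstreamSlicer"
  parent_stream_configs:
    - stream: "*ref(definitions.allworkspaces_stream)"
      parent_key: "sid"
      stream_slice_field: "workspace_sid"

Each child stream's path can then interpolate the slice value, e.g. "/v1/Workspaces/{{ stream_slice.workspace_sid }}/Workers", instead of requiring the user to supply a workspace ID.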

@marcosmarxm
Contributor

marcosmarxm commented Nov 18, 2022

/publish connector=connectors/source-twilio-taskrouter

🕑 Publishing the following connectors:
connectors/source-twilio-taskrouter
https://github.com/airbytehq/airbyte/actions/runs/3499125417


Connector | Did it publish? | Were definitions generated?
connectors/source-twilio-taskrouter | |

If you have connectors that successfully published but failed definition generation, follow step 4 here ▶️

Contributor

@marcosmarxm marcosmarxm left a comment

Thanks @Alcadeus0 and @YiyangLi for the review.

@marcosmarxm marcosmarxm merged commit a81c7af into airbytehq:master Nov 18, 2022
@aazam-gh
Contributor Author

Thanks @marcosmarxm and @YiyangLi for reviewing and helping me with this PR.
Much appreciated :)

@RealChrisSean

@Alcadeus0 can you please update your profile with your email? :)

If you prefer, you can DM me instead with the following:

  • Email
  • Full Name
  • link to this PR

Thanks!

@aazam-gh
Contributor Author

@RealChrisSean I've added my email to my profile.


Labels

  • area/connectors (Connector related issues)
  • area/documentation (Improvements or additions to documentation)
  • bounty
  • bounty-XL (Maintainer program: claimable extra large bounty PR)
  • community
  • connectors/source/twilio-taskrouter
  • hacktober
  • reward-200

9 participants