
Alright, here's what we're going to try.

First off, we're not giving hard guarantees as to the maximum length of strings returned. Such a guarantee could create a situation where we might have to release an API revision just to accommodate increased text limits on the trilogy sites, or one - which is equally bad - where the API would return truncated results until the next "natural" revision.

Furthermore, spelling out exact character limits brings us into the realm of Unicode dragons, which, frankly, is asking a lot of developers. This complexity can be seen full force in Twitter's API. People do like to use lots of Unicode, so it'd be a more common problem than you might think.
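
To make that concrete, here's a quick sketch of the problem (the sample string is made up, not pulled from any real response): the "length" of one short string differs depending on whether you count code points, UTF-8 bytes, or UTF-16 code units, and none of those match what a user would call the character count.

    # One string, several plausible "lengths" (sample data is hypothetical).
    s = "cafe\u0301 \U0001F600"  # "café 😀" with a combining acute accent

    print(len(s))                           # 7 Unicode code points
    print(len(s.encode("utf-8")))           # 11 bytes in UTF-8
    print(len(s.encode("utf-16-le")) // 2)  # 8 UTF-16 code units (the emoji is a surrogate pair)
    # ...and a user would say it's 6 characters (grapheme clusters).

So any "maximum length" we promised would immediately raise the question: maximum in which of those units?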

What we are doing is putting our current limits as suggested_buffer_size in the documentation. These are, at best, hints. Any code you write must be able to handle larger data if it's returned. This code will get pushed into production later tonight, in all likelihood.
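
If you're writing into fixed-width storage, a minimal defensive sketch looks something like this (the 256-byte hint and the field value are assumptions for illustration, not numbers from our documentation):

    SUGGESTED_BUFFER_SIZE = 256  # hypothetical documented hint; advisory only

    def store_field(value, hint=SUGGESTED_BUFFER_SIZE):
        """Encode a returned string, handling hint overflow explicitly."""
        data = value.encode("utf-8")
        if len(data) > hint:
            # The hint was exceeded; pick a deliberate policy here (log and
            # keep the full value, widen the column, and so on) rather than
            # letting a fixed-width buffer silently truncate.
            print("warning: field is %d bytes, hint is %d" % (len(data), hint))
        return data

    # A field that outgrows the hint still round-trips intact.
    assert len(store_field("x" * 300)) == 300

The point is that the overflow branch exists at all: the hint sizes your happy path, it doesn't bound your input.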

Update: The suggested_buffer_size limits/properties were removed from the version 2.0+ API. They caused way too many headaches for not enough gain.
