
Conversation

@thibaultcha (Member) commented on Jan 8, 2018

I hereby grant the copyright of the changes in this pull request to the authors of the lua-nginx-module project.

Sister PR for openresty/lua-resty-core#165

```diff
 ngx_http_lua_ffi_shdict_incr(ngx_shm_zone_t *zone, u_char *key,
     size_t key_len, double *value, char **err, int has_init, double init,
-    int *forcible)
+    int exptime, int *forcible)
```
@agentzh (Member) commented on the diff above:

This should be `init_ttl` to avoid any ambiguity here. Also, we should use the `long` type here: `int` is usually 32-bit and can easily overflow.
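A sketch of how the sister PR (openresty/lua-resty-core#165) might declare the revised symbol through LuaJIT's FFI once both suggestions are applied, i.e. `exptime` renamed to `init_ttl` and widened from `int` to `long` (a sketch under those assumptions, not the verbatim merged code):

```lua
local ffi = require "ffi"

-- Hypothetical cdef sketch: the real declaration lives in
-- lua-resty-core and uses ngx_shm_zone_t *; void * stands in here to
-- keep the snippet self-contained. Widening to `long` matters because
-- shdict tracks expiration in milliseconds internally, so a 32-bit
-- int would cap TTLs at 2^31-1 ms, i.e. roughly 24.8 days.
ffi.cdef[[
int ngx_http_lua_ffi_shdict_incr(void *zone, const unsigned char *key,
    size_t key_len, double *value, char **err, int has_init, double init,
    long init_ttl, int *forcible);
]]
```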

@thibaultcha (Member, Author) replied:

Sounds like a saner name to me 👍

Should we make the same change to this value's type in the already-existing implementations that use it, like `ngx_http_lua_ffi_shdict_set_expire`, `ngx_http_lua_ffi_shdict_store`, etc.? In another PR, of course.

@agentzh (Member) replied:

@thibaultcha No, they do not have the same ambiguity as the `incr()` method.

@thibaultcha (Member, Author) replied:

What I mean is that they also use the `int` type (I agree their naming is fine).

@thibaultcha (Member, Author) added:

Initial comment addressed.

@agentzh (Member) replied:

@thibaultcha Oh, right. Better in a separate PR?

@agentzh (Member) replied:

@thibaultcha Thanks for the catch!

thibaultcha force-pushed the feat/incr-exptime-ffi branch from 9626013 to 944d4a1 on January 9, 2018 at 02:56.
@agentzh (Member) commented on Jan 10, 2018

@thibaultcha I've already merged this. Will you create another PR to document this feature properly? Thanks!

agentzh closed this on January 10, 2018.
thibaultcha deleted the feat/incr-exptime-ffi branch on January 10, 2018 at 01:51.
@thibaultcha (Member, Author) replied:

Will do

thibaultcha added a commit to Kong/kong that referenced this pull request Mar 17, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Mar 20, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Mar 26, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Mar 28, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Apr 23, 2018
thibaultcha added a commit to Kong/kong that referenced this pull request Apr 24, 2018
This is part of a series of fixes:

- thibaultcha/lua-resty-mlcache#41
- thibaultcha/lua-resty-mlcache#42
- #3311
- #3341

Context
-------

In the `local` mode of the rate-limiting plugins, storing the rate-limiting counters in the same shm used by Kong's database cache is too invasive for the underlying shm, especially when the rate-limiting plugins are used with a `seconds` precision.

On top of exhausting the database cache slots, this approach also generates some fragmentation in the shm, due to the side-by-side storage of values whose sizes differ by orders of magnitude (JSON strings vs. an incremented double) and to the LRU eviction mechanism. When the shm is full and LRU eviction kicks in, it is highly probable that several rate-limiting counters will be evicted (due to their proliferation), without freeing enough space to store the retrieved data, causing a `no memory` error to be reported by the shm.

Solution
--------

Declaring shms that are only used by some plugins is not very elegant; now, all users (even those not using the rate-limiting plugins) have to pay a memory cost, albeit a small one. Unfortunately, in the absence of a more dynamic solution to shm configuration, such as a more dynamic templating engine or a `configure_by_lua` phase, this is the safest solution.

Size rationale
--------------

Running a script generating similar keys and storing similar values (doubles) indicates that an shm of 12MB should be able to store about 48,000 of those values at once. It is important to remember that one Consumer/IP address might use more than one key (in fact, one per period configured on the plugin), and might use both the rate-limiting and response-ratelimiting plugins at once, which share the same shms.

Even considering the above, about 48,000 keys per node seems reasonable, since keys of `second` precision will most likely fill up the shm and be candidates for LRU eviction. Our concern lies instead with long-lived limits (and thus keys) set by the user.

Additionally, a future improvement will be to set the `init_ttl` argument for the rate-limiting keys, which will help **quite considerably** in reducing the footprint of the plugins on the shm. As of this day, this feature has been contributed to ngx_lua but not yet released: openresty/lua-nginx-module#1226

Again, this limit only applies when using the **local** strategy, which also likely means that a load balancer is distributing traffic to a pool of Kong nodes with some sort of consistent load-balancing technique, thus considerably reducing the number of concurrent Consumers a given node needs to handle at once.

See also
--------

Another piece of the fix for the `no memory` errors resides in the behavior of the database caching module upon a full shm; see thibaultcha/lua-resty-mlcache#41. This patch greatly reduces the likelihood of a full shm, but does not eliminate it. The above patch ensures still-sane behavior should the shm happen to be full again.

Fix #3124
Fix #3241

From #3311
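The measurement script mentioned under "Size rationale" is not shown; a minimal sketch of what such a script could look like follows (a hypothetical reconstruction, not the actual script, assuming an nginx configuration declaring `lua_shared_dict rate_limiting_counters 12m;` and a `content_by_lua_block` context):

```lua
-- Fill the shm with keys and values shaped like the plugins' counters
-- and count how many fit. safe_set() returns nil and "no memory" when
-- the shm is full, instead of evicting older entries like set() would.
local dict = ngx.shared.rate_limiting_counters

local i = 0
while true do
    -- hypothetical key shape: <identifier>:<period_date>:<period>
    local key = string.format("%d:%d:second", i, i)
    local ok, err = dict:safe_set(key, 1.5)
    if not ok then
        ngx.say("stored ", i, " doubles before: ", err)
        break
    end
    i = i + 1
end
```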
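Once the `init_ttl` argument ships (via this PR plus openresty/lua-resty-core#165), the improvement mentioned above could look roughly as follows; the key and the 60-second TTL are arbitrary example values:

```lua
local counters = ngx.shared.rate_limiting_counters

-- Increment by 1; if the key is absent, initialize it as 0 + 1 and
-- give the freshly created key a 60s TTL. init_ttl only takes effect
-- when the key is initialized, not when an existing value is bumped.
local newval, err = counters:incr("127.0.0.1:second", 1, 0, 60)
if not newval then
    ngx.log(ngx.ERR, "failed to increment counter: ", err)
end
```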