I'm something of a rookie, so apologies if I'm missing something obvious; my research hasn't turned up much that relates to our somewhat unusual situation.
I'm going to be transitioning a PHP app to cloud-based servers. It currently runs on a single server and uses memcached on localhost to store extremely small amounts of data per user (not session data, just high-accessibility transitional data that is user-specific). We're trying to make this scalable, since our current web server is beginning to max out concurrent Apache connections during rare periods of peak load.
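For reference, this is roughly what we do today; the key name and data are made up for illustration:

```php
<?php
// Simplified version of the current setup: each web request talks to
// memcached running on the same box (localhost).
$userId = 42;

$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

// Stash a small, user-specific blob for quick lookup on later requests.
$cache->set("user_state:$userId", ['step' => 3, 'cart_total' => 19.95], 300);

// ...and on a later request to the same server:
$state = $cache->get("user_state:$userId");
```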
We'll be running the cloud web servers behind a load balancer with session stickiness, but I'm stymied deciding between two options: adapt the app to connect to memcached on a standalone dedicated server, so the memory pool is shared by all the web servers; or reserve a portion of each cloud server's memory for memcached and keep the application pointed at localhost, trusting session stickiness to ensure that the server holding a user's session also holds that user's data in its memory.
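As I understand it, the dedicated-server option would only require changing where the client points; something like this, where the hostname is just a placeholder:

```php
<?php
// Option 1 sketch: point every web server at a shared memcached box
// instead of localhost. "cache01.internal" is a made-up hostname.
$userId = 42;

$cache = new Memcached();
$cache->addServer('cache01.internal', 11211);

// Every web server reads and writes the same pool, so it shouldn't
// matter which node the load balancer sends the user to.
$state = $cache->get("user_state:$userId");
```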
My current thinking is that a dedicated memcached server would be a cleaner implementation, but possibly more complicated to scale in the longer run if we end up expanding our use of memcached to more complex data. Running a memcached instance on each cloud server, on the other hand, would consume additional resources every time we add another server (again, assuming the server holding the user's session could reliably look up that user's cached data).
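On the scaling question, I believe the php-memcached client can spread keys across several cache nodes with consistent hashing if the shared pool ever outgrows one box; roughly like this, with placeholder hostnames:

```php
<?php
// Sketch of growing the shared pool: consistent hashing means adding a
// node only remaps a fraction of the keys. Hostnames are placeholders.
$cache = new Memcached();
$cache->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
$cache->addServers([
    ['cache01.internal', 11211],
    ['cache02.internal', 11211],
]);

// Application code stays the same regardless of pool size.
$state = $cache->get('user_state:42');
```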
I'd value anyone's opinions, insights, or corrections of any flaws in my understanding.