
I'm something of a rookie, so apologies if I'm missing something obvious; my research hasn't turned up much relating to our somewhat unique situation.

I'm going to be transitioning a PHP app, currently running on a single server, to cloud-based servers. It uses memcached on localhost to store extremely small amounts of data per user (not session data, just high-accessibility transitional data that is user-specific). We're trying to make this scalable, since our current web server is beginning to cap out on concurrent Apache connections during rare periods of peak load.

We'll be running the cloud web servers behind a load balancer with session stickiness, but I'm stymied in deciding between two options: adapt the app to connect to memcached on a stand-alone dedicated server, so the memory pool can be shared by all the web servers; or reserve a portion of each cloud server's memory for memcached and keep the application pointed at localhost, trusting session stickiness to ensure that the server holding the session also holds that user's data in its memory.
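For concreteness, the difference between the two options at the code level is basically just where the client points. A minimal sketch, assuming the PECL memcached extension (the hostname cache.internal is a made-up placeholder for a dedicated cache server):

    <?php
    // Option A: one dedicated memcached server shared by all web servers.
    // "cache.internal" is a placeholder hostname for illustration.
    $mc = new Memcached();
    $mc->addServer('cache.internal', 11211);

    // Option B: memcached on every web server; rely on the load
    // balancer's session stickiness to route a user back to the
    // server that holds their cached data.
    $mc = new Memcached();
    $mc->addServer('127.0.0.1', 11211);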

My current thinking is that a dedicated memcached server would be the cleaner implementation, but possibly more complicated to scale in the long run if we end up expanding our usage of memcached to more complex data. Maintaining an instance of memcached on each cloud server, on the other hand, would consume additional resources every time a server is added (again, assuming the server holding the user's session could reliably look up that user's cached data).
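If we did go the dedicated route and later outgrew a single cache server, my understanding is that the client can spread keys across several servers with consistent hashing, so adding a node only remaps a fraction of the keys. A rough sketch, again with made-up hostnames:

    <?php
    $mc = new Memcached();
    // Consistent (ketama-style) hashing: adding or removing a node
    // only remaps a fraction of the key space.
    $mc->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
    $mc->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);
    $mc->addServers([
        ['cache1.internal', 11211],
        ['cache2.internal', 11211],
    ]);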

I'd value anyone's opinion, insight, or pointing out any flaws in my comprehension.

  • How much traffic (both throughput and requests/s) does the memcached instance see? How important is latency to you? Are you memory-constrained on the web servers (will the data that needs to be cached easily fit into a web server's memory)? Are you planning for any failover scenario? If so, can your app handle the data not being in cache (it should!)? Commented Apr 20, 2013 at 9:19
  • @Fox I don't have specifics on traffic, unfortunately, but latency isn't much of a concern. We're essentially using it to monitor a user's simultaneous connections to video resources (roughly the counter pattern sketched below). It's a very small amount of data per user; we'd hit the limit on concurrent Apache connections on the single web server long before coming close to a memory ceiling. The app handles it gracefully if it can't find the data, so there's no concern there. Thank you for your consideration and help; let me know what you think or if I can offer any more info. Commented Apr 24, 2013 at 20:06
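For context, that usage is essentially a per-user counter; a minimal sketch of the pattern (the key name and one-hour TTL are illustrative, not the real values):

    <?php
    // Per-user counter of simultaneous video connections.
    $key = 'connections:' . $userId;

    // add() only succeeds if the key doesn't exist yet, so this
    // safely initializes the counter without clobbering it.
    $mc->add($key, 0, 3600);

    $mc->increment($key);    // when a stream starts
    // ...
    $mc->decrement($key);    // when the stream ends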

1 Answer


I'd say both ways are correct, but I slightly lean toward per-server memcached, as long as you have no shared data in memcached.

With shared memcached you get:

  • higher latency
  • higher LAN traffic
  • cached data survives a web server failure (the server that takes over still has all the info)
  • you lose all cached data if the memcached server itself fails (the only real concern I see)
    • you can run two, but that cuts into the budget

With per-server memcached you get:

  • lower latency
  • lower traffic
  • in case of failover to another web server, there is no old data (the app just sees a cold cache; see the sketch below)
  • but if one server fails, no other server loses anything
  • and it's cheaper
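Either way, the app should treat a missing key as a normal cold-cache case rather than an error. A minimal read-through sketch, where loadUserData is a hypothetical stand-in for whatever your authoritative source is:

    <?php
    // After a failover (or eviction) the key simply isn't there;
    // rebuild it from the authoritative source and re-cache it.
    $value = $mc->get($key);
    if ($value === false && $mc->getResultCode() === Memcached::RES_NOTFOUND) {
        $value = loadUserData($userId);    // hypothetical helper
        $mc->set($key, $value, 3600);
    }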
  • Per-server definitely sounds like the way to go. No single point of failure, and if a node goes down the user loses their session data anyhow, requiring them to re-authenticate and re-establish the memcached data, which is actually what we'd want in our situation. Thanks for the feedback! Commented Apr 26, 2013 at 16:14
