Add Cache-aside section

@@ -1273,3 +1273,42 @@ Suggestions of what to cache:
* Fully rendered web pages
* Activity streams
* User graph data

### When to update the cache

Since you can only store a limited amount of data in cache, you'll need to determine which cache update strategy works best for your use case.

#### Cache-aside

<p align="center">
  <img src="http://i.imgur.com/ONjORqk.png">
  <br/>
  <i><a href=http://www.slideshare.net/tmatyashovsky/from-cache-to-in-memory-data-grid-introduction-to-hazelcast>Source: From cache to in-memory data grid</a></i>
</p>

The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:

* Look for entry in cache, resulting in a cache miss
* Load entry from the database
* Add entry to cache
* Return entry

```
def get_user(self, user_id):
    # Try the cache first (cache-aside / lazy loading)
    key = "user.{0}".format(user_id)
    user = cache.get(key)
    if user is None:
        # Cache miss: load from the database, then populate the cache
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            cache.set(key, json.dumps(user))
    return user
```

[Memcached](https://memcached.org/) is generally used in this manner.
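
For illustration, here is a minimal sketch of the same cache-aside read wired to Memcached through the [pymemcache](https://pymemcache.readthedocs.io/) client. The host and port, the `user.{id}` key format, the 10-minute TTL, and the `get_user_from_db` helper are assumptions made for this example, not part of the snippet above:

```
import json

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))


def get_user(user_id):
    """Cache-aside read: check Memcached first, then fall back to the database."""
    key = "user.{0}".format(user_id)
    cached = cache.get(key)  # returns bytes, or None on a miss
    if cached is not None:
        return json.loads(cached)
    user = get_user_from_db(user_id)  # hypothetical database helper
    if user is not None:
        # Populate the cache; expire after 10 minutes so stale entries age out
        cache.set(key, json.dumps(user).encode("utf-8"), expire=600)
    return user
```

Serializing to JSON keeps the cached value as plain bytes, which is all Memcached stores.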

Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.

##### Disadvantage(s): cache-aside

* Each cache miss results in three trips, which can cause a noticeable delay.
* Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL) which forces an update of the cache entry, or by using write-through (see the invalidation sketch after this list).
* When a node fails, it is replaced by a new, empty node, increasing latency.
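
One common way to mitigate the staleness issue, besides a TTL, is to invalidate the cached entry whenever the application writes to the database, so the next read repopulates it lazily. A minimal sketch of that write path, reusing the illustrative `db` object from the snippet above and the pymemcache `cache` client from the Memcached example (the updated column and query style are placeholders):

```
def update_user_name(user_id, new_name):
    """Write path: update the database, then invalidate the cached entry."""
    db.query("UPDATE users SET name = {0} WHERE user_id = {1}", new_name, user_id)
    # Delete rather than rewrite the cached value; the next get_user() call
    # repopulates it lazily, so readers stop seeing the stale entry
    cache.delete("user.{0}".format(user_id))
```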