How to keep Redis data consistent with what's in SQL Server
I don't have experience scaling out a SQL back end, but from what I've read, sharding writes and caching reads seem to be two of the common practices. I'm trying to learn how eventual consistency can be minimized with the right caching strategy.
I'd use Azure SQL Database, Entity Framework and the Elastic Scale middleware, and Redis for testing purposes.
Is there a way to commit a distributed transaction to both SQL Server and Redis?
If not, what's an effective way to ensure read freshness when a database change occurs?
I could write to SQL and update the cache in the same API call, but writing to the cache might fail for whatever reason. I could implement retry logic, and assuming all attempts fail, either try to roll back the SQL transaction, or serve the old cache data to clients and periodically rebuild the cache to catch up with the database. Of course, the latter would mean data reads are not consistent for a period of time. Evicting the data and reading from the SQL cluster is another option, but cross-shard queries might be expensive, especially when they involve complex joins and there are hundreds, if not thousands, of databases on commodity hardware.
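To make the write path concrete, here is a minimal sketch in Python with redis-py of the write-through-with-retry-then-evict idea described above. `commit_to_sql` and `save_order` are hypothetical names (the actual stack here is EF/C#); only the pattern is the point:

```python
import json
import time

import redis

r = redis.Redis(host="localhost", port=6379)

def commit_to_sql(order_id: str, order: dict) -> None:
    ...  # placeholder: the actual EF / SQL write lives here

def save_order(order_id: str, order: dict) -> None:
    commit_to_sql(order_id, order)  # durable write to the source of truth first

    key = f"order:{order_id}"
    for attempt in range(3):  # bounded retry on the cache write
        try:
            r.set(key, json.dumps(order), ex=60)  # TTL caps how long stale data can live
            return
        except redis.RedisError:
            time.sleep(0.1 * (attempt + 1))  # simple backoff between attempts

    # All retries failed: evict rather than leave a stale entry behind,
    # so the next read falls through to SQL instead of serving old data.
    try:
        r.delete(key)
    except redis.RedisError:
        pass  # the TTL on the old entry still bounds how long it can survive
```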
Your idea in the last part of your post (write to SQL and update the cache in the same API call) seems reasonable. I'd change it slightly: give the cache a low but reasonable TTL, such as one minute, and update the cache on reads, so that when a query hits the database, the next query is a cache hit instead. There's a sketch of this after the pros and cons below.
Pros:
- Past the one-minute mark, you know users are getting the right data.
- Fault-tolerant: if there's something wrong with the cache and you can't update it for whatever reason, the next query goes to the database instead and the client still gets the correct data.
Cons:
- You'll have more reads against the database (though one read per key per minute shouldn't be a big deal).
- Clients may see old data for up to one minute (at most, usually less) past an update.
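A minimal sketch of this cache-aside pattern in Python with redis-py; `load_order_from_sql` is a hypothetical stand-in for the database query, and the 60-second TTL is the knob that bounds staleness:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379)
CACHE_TTL_SECONDS = 60  # upper bound on how stale a read can be

def load_order_from_sql(order_id: str) -> dict:
    ...  # placeholder: the actual EF / SQL query lives here

def get_order(order_id: str) -> dict:
    key = f"order:{order_id}"
    try:
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)  # cache hit: no database round trip
    except redis.RedisError:
        pass  # cache is down or misbehaving: fall through to the database

    order = load_order_from_sql(order_id)  # cache miss: read the source of truth

    try:
        # Populate on read so the next call within the TTL is a cache hit.
        r.setex(key, CACHE_TTL_SECONDS, json.dumps(order))
    except redis.RedisError:
        pass  # a failed cache write only costs an extra database read later
    return order
```

Note that the cache write failing is harmless here: the next read just goes to the database again, which is exactly the fault-tolerance point above.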