6. Redis Clients

6.1 Java Client

Add the dependency

```xml
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>3.0.1</version>
    <type>jar</type>
    <scope>compile</scope>
</dependency>
```

Test

```java
import redis.clients.jedis.Jedis;

public class RedisTest {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // PING test
        System.out.println(jedis.ping());
    }
}
```

Output

```
PONG
```

6.2 Common APIs

set keys

```java
import redis.clients.jedis.Jedis;

public class TestRedisAPI {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // set keys
        jedis.set("k1", "v1");
        jedis.set("k2", "v2");
        jedis.set("k3", "v3");
    }
}
```

get key

```java
import redis.clients.jedis.Jedis;

public class TestRedisAPI {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // get key
        System.out.println(jedis.get("k1"));
    }
}
```

keys *

```java
import java.util.Set;

import redis.clients.jedis.Jedis;

public class TestRedisAPI {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // keys *
        Set<String> keys = jedis.keys("*");
        // iterate over the keys
        for (String key : keys) {
            System.out.println("key: " + key);
        }
        // number of keys
        System.out.println("key size: " + keys.size());
    }
}
```

7. Common Redis Configuration

7.1 INCLUDES

```
# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
```


Configuration

```
# include /path/to/local.conf
# include /path/to/other.conf
```

7.2 NETWORK

Bind address

```
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
```


Port

```
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
```


Client idle timeout

```
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
```


7.3 GENERAL

PID file path

```
# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
pidfile /var/run/redis_6379.pid
```


Server log level

```
# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice
```


Log file name

```
# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""
```


Number of databases

```
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
```


7.4 SNAPSHOTTING

Persisting the DB to disk

```
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
```


Dump file name

```
# The filename where to dump the DB
dbfilename dump.rdb
```

Working directory

```
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
```


7.5 MEMORY MANAGEMENT

Maximum memory limit

```
# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
maxmemory <bytes>
```


Eviction policies

```
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
# The default is:
#
# maxmemory-policy noeviction
```

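The LRU idea behind volatile-lru / allkeys-lru can be sketched locally with a capacity-bounded LinkedHashMap in access order. This is a simplified, exact LRU for illustration only (the class name is made up); as the config comment says, Redis itself uses an approximated, sampled LRU:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Exact LRU cache: the least-recently-used entry is evicted once capacity
// is exceeded. Redis's allkeys-lru approximates this by sampling keys.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LruSketch(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: reads refresh recency
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruSketch<String, String> cache = new LruSketch<>(3);
        cache.put("k1", "v1");
        cache.put("k2", "v2");
        cache.put("k3", "v3");
        cache.get("k1");        // k1 is now the most recently used
        cache.put("k4", "v4");  // evicts k2, the least recently used
        System.out.println(cache.keySet()); // [k3, k1, k4]
    }
}
```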

Eviction sample size

```
# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
maxmemory-samples 5
```


8. Other Redis Features

8.1 Slow Log

The life cycle of a Redis request

Notes

  • Slow queries happen in stage 3 (command execution)
  • A client timeout is not necessarily caused by a slow query; a slow query is only one possible cause of client timeouts

Slow log configuration

```
# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
```

The slow log is a fixed-length, first-in-first-out queue with two commonly used parameters:

  • slowlog-log-slower-than

    The execution-time threshold (in microseconds) above which a command is logged. The default is 10000 (10 ms); a negative value disables the slow log, and 0 logs every command.

  • slowlog-max-len

    The number of entries the slow log keeps; the default is 128.
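The fixed-length FIFO behavior described above can be sketched with a bounded deque (class and method names here are illustrative, not Redis internals):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Mimics the slow log queue: commands slower than the threshold are
// appended; once the queue is full, the oldest entry is dropped.
public class SlowLogSketch {

    private final long slowerThanMicros; // like slowlog-log-slower-than
    private final int maxLen;            // like slowlog-max-len
    private final Deque<String> entries = new ArrayDeque<>();

    public SlowLogSketch(long slowerThanMicros, int maxLen) {
        this.slowerThanMicros = slowerThanMicros;
        this.maxLen = maxLen;
    }

    public void record(String command, long execMicros) {
        if (slowerThanMicros < 0) return;                           // negative: disabled
        if (slowerThanMicros != 0 && execMicros <= slowerThanMicros) return;
        if (entries.size() == maxLen) entries.removeFirst();        // FIFO eviction
        entries.addLast(command);
    }

    public Deque<String> entries() {
        return entries;
    }

    public static void main(String[] args) {
        SlowLogSketch log = new SlowLogSketch(10000, 3); // 10 ms threshold, 3 entries
        log.record("GET k1", 500);       // fast: not logged
        log.record("KEYS *", 20000);
        log.record("SMEMBERS s", 30000);
        log.record("HGETALL h", 40000);
        log.record("SORT big", 50000);   // queue full: "KEYS *" is dropped
        System.out.println(log.entries()); // [SMEMBERS s, HGETALL h, SORT big]
    }
}
```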

Related commands

  Command              | Description
  SLOWLOG GET [number] | list slow log entries
  SLOWLOG LEN          | number of entries currently in the slow log
  SLOWLOG RESET        | clear the slow log

Recommended settings

  • Do not set slowlog-log-slower-than too high; the default is 10 ms, and 1 ms is a common production setting
  • Do not set slowlog-max-len too low; around 1000 is typical
  • Persist the slow log periodically (it is held in memory only)

8.2 Pipeline

1 pipeline call = 1 network round trip + n command executions

What pipelining changes

          | n individual commands                | 1 pipeline of n commands
  Time    | n network round trips + n executions | 1 network round trip + n executions
  Payload | 1 command per request                | n commands per request

Notes

  • Redis command execution takes microseconds, so network time usually dominates
  • Cap the number of commands per pipeline call (network bandwidth)
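With illustrative numbers (assume a 1 ms network round trip and 0.1 ms per command; the real values depend on your network and workload), the time model above works out like this:

```java
// Rough cost model: n separate requests pay the round trip n times,
// a single pipeline of n commands pays it once.
public class PipelineCostModel {

    static double separate(int n, double rttMs, double cmdMs) {
        return n * (rttMs + cmdMs);
    }

    static double pipelined(int n, double rttMs, double cmdMs) {
        return rttMs + n * cmdMs;
    }

    public static void main(String[] args) {
        int n = 10000;
        double rtt = 1.0, cmd = 0.1; // assumed values, for illustration only
        System.out.printf("separate:  %.0f ms%n", separate(n, rtt, cmd));  // 11000 ms
        System.out.printf("pipelined: %.0f ms%n", pipelined(n, rtt, cmd)); // 1001 ms
    }
}
```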

Code comparison

Plain operations

```java
import redis.clients.jedis.Jedis;

public class PlainSetDemo {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.0.110", 6379);

        long startTime = System.currentTimeMillis();
        for (int i = 0; i < 10000; i++) {
            jedis.set("k" + i, i + "");
        }
        long endTime = System.currentTimeMillis();
        long usedTime = (endTime - startTime) / 1000;
        System.out.println(usedTime + "s");
    }
}
```

With a pipeline

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class PipelineSetDemo {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.0.110", 6379);

        long startTime = System.currentTimeMillis();
        // 100 pipeline calls of 100 commands each
        for (int i = 0; i < 100; i++) {
            Pipeline pipelined = jedis.pipelined();
            for (int j = i * 100; j < (i + 1) * 100; j++) {
                // note: must use j here (the original used i, which kept
                // overwriting the same key instead of writing 10000 keys)
                pipelined.set("k" + j, j + "");
            }
            pipelined.sync();
        }
        long endTime = System.currentTimeMillis();
        long usedTime = (endTime - startTime) / 1000;
        System.out.println(usedTime + "s");
    }
}
```

Pipeline vs. native M-commands (MSET, MGET, ...)

  • M-commands are atomic
  • A pipeline is not atomic, but its replies come back in order

Usage advice

  • Watch how much data each pipeline call carries

8.3 Publish/Subscribe and Message Queues

As an in-memory database, Redis can do more than cache frequently accessed data: it can also act as a publish/subscribe broker or a simple message queue.

Publish/Subscribe

Subscribe

```
subscribe cctv:1
```

Publish

```
publish cctv:1 "helloworld"
```

Message queue

8.4 Bitmap

A bitmap is a contiguous string of binary digits (0 or 1); each bit's position is its offset. Bitmap commands operate on the smallest possible unit, a single bit, setting it to 0 or 1 to represent the value or state of an element.

getbit

```
set k1 big
getbit k1 0
```

setbit

```
setbit k1 7 1
```

bitcount key [start end]

Counts the bits set to 1 in the given range (start and end are byte offsets; omit them to count the whole value):

```
bitcount k1
```
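What those commands do to the underlying bits can be replayed in plain Java. Redis addresses bit 0 as the most significant bit of the first byte, which this sketch mirrors (class and method names are illustrative):

```java
// Replays the bitmap examples above on the bytes of "big":
// GETBIT offset 0, SETBIT offset 7, and BITCOUNT.
public class BitmapSketch {

    // bit 0 is the most significant bit of byte 0, as in Redis
    static int getBit(byte[] s, int offset) {
        return (s[offset / 8] >> (7 - offset % 8)) & 1;
    }

    static void setBit(byte[] s, int offset, int value) {
        int mask = 1 << (7 - offset % 8);
        if (value == 1) s[offset / 8] |= mask;
        else s[offset / 8] &= ~mask;
    }

    static int bitCount(byte[] s) {
        int count = 0;
        for (byte b : s) count += Integer.bitCount(b & 0xFF);
        return count;
    }

    public static void main(String[] args) {
        byte[] k1 = "big".getBytes();        // 'b' = 01100010
        System.out.println(getBit(k1, 0));   // 0, like GETBIT k1 0
        System.out.println(bitCount(k1));    // 12, like BITCOUNT k1
        setBit(k1, 7, 1);                    // 'b' -> 01100011 = 'c'
        System.out.println(new String(k1));  // "cig"
    }
}
```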

Unique-user counting

  Type   | Space per user id | Users stored | Total memory
  set    | 32 bits           | 50,000,000   | 32 bits * 50,000,000 = 200 MB
  Bitmap | 1 bit             | 100,000,000  | 1 bit * 100,000,000 = 12.5 MB

         | One day | One month | One year
  set    | 200 MB  | 6 GB      | 72 GB
  Bitmap | 12.5 MB | 375 MB    | 4.5 GB
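The arithmetic behind the first table, using decimal megabytes (1 MB = 1,000,000 bytes) as the table does:

```java
// Memory needed to mark 50,000,000 active users out of 100,000,000:
// a set of 32-bit ids vs. one bit per possible user id.
public class BitmapMemoryMath {

    static double setMb() {
        long bits = 32L * 50_000_000L;      // 1.6e9 bits
        return bits / 8.0 / 1_000_000;      // 200.0 MB
    }

    static double bitmapMb() {
        long bits = 1L * 100_000_000L;      // 1e8 bits
        return bits / 8.0 / 1_000_000;      // 12.5 MB
    }

    public static void main(String[] args) {
        System.out.println("set:    " + setMb() + " MB");
        System.out.println("bitmap: " + bitmapMb() + " MB");
    }
}
```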

8.5 HyperLogLog

HyperLogLog is an algorithm for cardinality estimation. Its strength is that even when the number or size of the input elements is enormous, the space needed to compute the cardinality stays small and fixed.

Cardinality is the number of distinct values in a set, also called the Distinct Value (DV) count.

Under the hood, a HyperLogLog is still stored as a string.
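"Cardinality" is just the distinct-value count. An exact computation with a HashSet makes the concept concrete and shows the trade-off: the exact set grows with the data, while a HyperLogLog stays at roughly 12 KB per key at the cost of a small error (about 0.81% standard error):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Exact distinct-value (DV) counting; PFCOUNT approximates this number.
public class CardinalityDemo {

    static int distinctValues(List<String> elements) {
        Set<String> seen = new HashSet<>(elements); // duplicates collapse
        return seen.size();
    }

    public static void main(String[] args) {
        List<String> visits = Arrays.asList("u1", "u2", "u1", "u3", "u2", "u1");
        System.out.println(distinctValues(visits)); // 3
    }
}
```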

Three commands

Add elements to a HyperLogLog:

```
pfadd key element [element ...]
```

Count the approximate number of distinct elements:

```
pfcount key [key ...]
```

Merge multiple HyperLogLogs:

```
pfmerge destkey sourcekey [sourcekey ...]
```

8.6 GEO

GEO (geolocation): stores longitude/latitude pairs and supports operations such as computing the distance between two places and radius queries.

Use cases

  • "Shake to find people nearby" features (e.g. WeChat)
  • Searching for people or buildings within a given range

Add coordinates

```
geoadd cityGeo 116.405285 39.904989 "北京"
geoadd cityGeo 121.472644 31.231706 "上海"
```

Look up the coordinates of a member (multiple members can be queried in one call):

```
127.0.0.1:6379> geopos cityGeo 北京
1) "116.40528291463851929"
2) "39.9049884229125027"
```

Return the distance between two members, optionally in meters (m), kilometers (km), miles (mi), or feet (ft):

```
127.0.0.1:6379> geodist cityGeo 北京 上海
"1067597.9668"
127.0.0.1:6379> geodist cityGeo 北京 上海 km
"1067.5980"
```
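GEODIST's result can be sanity-checked with the haversine great-circle formula. This is a sketch: it assumes the Earth radius Redis uses (6372797.560856 m), and the result differs slightly from GEODIST because Redis quantizes the stored coordinates into a geohash:

```java
// Great-circle distance between the two GEOADDed cities above.
public class GeoDistCheck {

    static final double EARTH_RADIUS_M = 6372797.560856; // radius Redis uses

    static double haversineMeters(double lon1, double lat1,
                                  double lon2, double lat2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        // Beijing and Shanghai, the coordinates passed to GEOADD above
        double meters = haversineMeters(116.405285, 39.904989,
                                        121.472644, 31.231706);
        System.out.printf("%.0f m (about 1068 km)%n", meters);
    }
}
```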

Return the members within a given radius of a point:

```
georadius cityGeo 116.405285 39.904989 100 km WITHDIST WITHCOORD ASC COUNT 5
```

  • WITHDIST returns the distance, WITHCOORD the coordinates, WITHHASH the geohash value
  • ASC or DESC sorts the results by distance
  • COUNT limits the number of results returned

Return the members within a given radius of an existing member:

```
georadiusbymember cityGeo 北京 100 km WITHDIST WITHCOORD ASC COUNT 5
```

9. Redis Persistence

Redis keeps all of its data in memory, so if persistence is not configured, all data is lost when Redis restarts. Enabling persistence saves the data to disk, from which Redis can restore it after a restart.

Redis offers two persistence mechanisms:

  • RDB (Redis DataBase) persistence
  • AOF (Append Only File) persistence

Fork

A process consists of its code, its data, and the resources allocated to it.

The fork() system call creates a new process that is almost identical to the caller: both can run exactly the same code, although different initial parameters or inputs can make them do different things.

When a process calls fork(), the system first allocates resources for the new process (space for data and code), then copies essentially all of the parent's state into it; only a few values differ from the parent, so it amounts to the process cloning itself.

9.1 RDB Persistence

RDB persistence is essentially a point-in-time snapshot of the dataset.

Trigger mechanisms

  • save (synchronous)
  • bgsave (asynchronous)
  • automatic save points

The save command

The bgsave command

save vs. bgsave

  Aspect     | save                   | bgsave
  IO type    | synchronous            | asynchronous
  Blocking   | yes                    | yes, but only during fork()
  Complexity | O(n)                   | O(n)
  Advantage  | no extra memory needed | does not block client commands
  Drawback   | blocks client commands | fork() consumes extra memory

Automatic save points

```
################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""

save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
```
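The three save lines combine with OR semantics: a snapshot is triggered as soon as any one (seconds, changes) pair is satisfied. A sketch of that decision (class and method names are illustrative, not Redis internals):

```java
// Decides whether "save 900 1 / save 300 10 / save 60 10000" would
// trigger a background snapshot for the given elapsed time and dirty count.
public class SavePoints {

    static final long[][] SAVE_POINTS = {{900, 1}, {300, 10}, {60, 10000}};

    static boolean shouldBgsave(long elapsedSeconds, long changedKeys) {
        for (long[] p : SAVE_POINTS) {
            if (elapsedSeconds >= p[0] && changedKeys >= p[1]) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldBgsave(901, 1));    // true  (900s elapsed, 1 change)
        System.out.println(shouldBgsave(61, 10000)); // true  (60s elapsed, 10000 changes)
        System.out.println(shouldBgsave(299, 9));    // false (no save point satisfied)
    }
}
```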

Recommended settings

  • dbfilename dump-${port}.rdb
  • dir /bigdiskpath
  • stop-writes-on-bgsave-error yes
  • rdbcompression yes
  • rdbchecksum yes

Other triggers

  • Full resynchronization (master-replica replication)
  • debug reload (a debug-level restart that does not flush memory; it also regenerates the RDB file)
  • shutdown

9.2 AOF Persistence

AOF is disabled by default. It exists to compensate for RDB's weakness (data can be lost between snapshots), so it logs every write operation and appends it to a file. On restart, Redis replays the logged write commands from beginning to end to restore the data.

How AOF works

How AOF recovery works

AOF configuration

```
############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")

appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no
```

The three fsync policies

Comparison

         | always             | everysec                    | no
  Pros   | no data loss       | only one fsync per second   | no fsync cost (OS decides)
  Cons   | heavy fsync/IO overhead | up to 1 second of data lost | loss window uncontrollable

AOF rewrite

What rewriting achieves

  • Reduces disk usage
  • Speeds up recovery

Two ways to trigger a rewrite

  • the BGREWRITEAOF command
  • automatic rewrite configuration

How BGREWRITEAOF works

Rewrite configuration

  Option                      | Description
  auto-aof-rewrite-min-size   | minimum AOF size before a rewrite is considered
  auto-aof-rewrite-percentage | growth rate that triggers a rewrite

  Statistic        | Description
  aof_current_size | current AOF size (bytes)
  aof_base_size    | AOF size at startup or after the last rewrite (bytes)

Automatic trigger condition (both must hold)

  • aof_current_size > auto-aof-rewrite-min-size
  • (aof_current_size - aof_base_size) / aof_base_size > auto-aof-rewrite-percentage
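The two conditions can be checked with a few lines (class and method names are illustrative; percentage is in percent, matching auto-aof-rewrite-percentage 100):

```java
// Both conditions must hold: the AOF is past the minimum size AND it has
// grown by more than the configured percentage since the last rewrite.
public class AofRewriteTrigger {

    static boolean shouldRewrite(long currentSize, long baseSize,
                                 long minSize, long percentage) {
        if (currentSize <= minSize) return false;
        long growthPercent = (currentSize - baseSize) * 100 / baseSize;
        return growthPercent > percentage;
    }

    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // 200 MB file, 64 MB after last rewrite: growth ~212% > 100% -> rewrite
        System.out.println(shouldRewrite(200 * mb, 64 * mb, 64 * mb, 100)); // true
        // 100 MB file, 64 MB base: growth ~56% -> no rewrite yet
        System.out.println(shouldRewrite(100 * mb, 64 * mb, 64 * mb, 100)); // false
    }
}
```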

Recommended settings

  • appendonly yes
  • appendfilename "appendonly-${port}.aof"
  • dir /bigdiskpath
  • no-appendfsync-on-rewrite yes
  • auto-aof-rewrite-percentage 100
  • auto-aof-rewrite-min-size 64mb

9.3 RDB vs. AOF

  Aspect           | RDB                          | AOF
  Startup priority | low                          | high
  File size        | small                        | large
  Recovery speed   | fast                         | slow
  Data safety      | loses data between snapshots | depends on the fsync policy
  Weight           | heavyweight                  | lightweight

RDB best practices

  • "Off": disable automatic save points on the master
  • Manage snapshots centrally
  • In master-replica setups, enable RDB on the replica

AOF best practices

  • Manage AOF rewrites centrally
  • Use everysec

10. Transactions

A Redis transaction executes multiple commands in one go; it is essentially a group of commands. All commands in a transaction are serialized and executed sequentially, and the operation is atomic: either all of the commands run or none do.

10.1 Transaction commands

  • Redis transaction commands

    Command             | Description
    MULTI               | marks the start of a transaction
    EXEC                | executes all commands queued in the transaction
    DISCARD             | aborts the transaction, discarding all queued commands
    WATCH key [key ...] | watches one or more keys; if any is modified by another command before the transaction executes, the transaction is aborted
    UNWATCH             | cancels WATCH on all keys

    Command (queueing) errors

    Runtime errors

10.2 WATCH

Pessimistic locking

Always assume the worst: every read assumes someone else will modify the data, so a lock is taken on every access, and anyone else who wants the data blocks until the lock is released (the shared resource goes to one thread at a time while the others wait). Traditional databases rely heavily on this: table locks, row locks, read locks, and write locks are all taken before the operation.

Optimistic locking

Always assume the best: reads take no lock, and the update checks whether anyone else modified the data in the meantime, for example via a version number. Optimistic locking suits read-heavy workloads and improves throughput.

Optimistic-lock rule: the update only proceeds if the submitted version is greater than the record's current version.

CAS (Compare and Swap)

CAS is an optimistic-locking technique: when multiple threads try to update the same variable with CAS at once, only one succeeds. The others fail, but instead of being suspended they are told they lost the race and may retry.
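Java's java.util.concurrent.atomic classes expose exactly this operation; a minimal demonstration:

```java
import java.util.concurrent.atomic.AtomicInteger;

// compareAndSet only writes if the current value still equals the expected
// one; a stale expectation fails instead of blocking, and the caller retries.
public class CasDemo {

    public static void main(String[] args) {
        AtomicInteger balance = new AtomicInteger(100);

        boolean first = balance.compareAndSet(100, 80);  // expected 100: wins
        boolean second = balance.compareAndSet(100, 60); // stale expected: loses

        System.out.println(first);          // true
        System.out.println(second);         // false
        System.out.println(balance.get());  // 80
    }
}
```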

WATCH in practice

While one or more keys are being watched, if another client modifies any watched key before the transaction executes, the operations on the watched keys fail.

Watch a key

Modify balance from another client first

Operate on the watched key

If the result is (nil), the modification of the key failed.

Summary:

  1. WATCH works like an optimistic lock: at EXEC time, if a watched key has been modified by another client, the whole queued transaction is discarded
  2. Once WATCH is placed on keys, any change to a watched key invalidates the transaction submitted with EXEC, which returns (nil)

10.3 Transactions with Jedis

```java
import java.util.List;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class WatchRetryDemo {

    public static void main(String[] args) throws InterruptedException {
        Jedis jedis = new Jedis("192.168.123.128", 6379);
        List<Object> exec = null;
        // exec() returns null if a watched key was modified by another
        // client, so keep retrying until the transaction commits
        while (exec == null) {
            System.out.println(jedis.watch("k1"));
            Transaction transaction = jedis.multi();
            Thread.sleep(3000); // window for another client to touch k1
            transaction.set("k1", "v2");
            exec = transaction.exec();
            System.out.println(exec);
            Thread.sleep(2000);
        }
    }
}
```