Ceph-RadosGW-S3

Overview

This article focuses on Ceph RADOS Gateway (RGW), the key component behind Ceph's object storage gateway.

Terminology

  1. RADOS: Reliable Autonomic Distributed Object Store
  2. OSD: Object Storage Device
  3. MON: Ceph Monitor
  4. librados: the library for accessing RADOS
  5. RBD: RADOS Block Device
  6. RGW: RADOS Gateway
  7. MDS: Metadata Server, the Ceph metadata server
  8. CephFS: Ceph File System
  9. Pool: RADOS stores data as objects inside pools; a pool is the logical partition used to store objects
rados lspools
rados -p metadata ls
rados df

An OSD is a daemon process.

service ceph status osd
service ceph -a status osd
ceph osd ls
ceph osd stat
ceph osd tree

A MON is also a lightweight daemon.

ceph mon dump
ceph osd dump
ceph pg dump
ceph osd crush dump
ceph mds dump
service ceph status mon
ceph mon stat
ceph mon_status
ceph mon dump

The MDS is the metadata server and is only needed by CephFS.

A PG (placement group) is a logical grouping of objects.

ceph osd pool get data pg_num
ceph osd pool get data pgp_num
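
A commonly cited rule of thumb for sizing pg_num (my own addition, not from the original notes): pg_num is roughly (number of OSDs x 100) / replica count, rounded up to a power of two. For the 12-OSD cluster shown later with 3 replicas that gives 12 x 100 / 3 = 400, rounded up to 512, which could then be applied like this:

ceph osd pool set data pg_num 512
ceph osd pool set data pgp_num 512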

Deploying a Ceph Cluster with ceph-deploy

ceph-deploy new ceph-node1
ceph-deploy install ceph-node1 ceph-node2 ceph-node3
ceph-deploy mon create ceph-node1
ceph-deploy gatherkeys ceph-node1
ceph-deploy mon create-initial
ceph-deploy mon create ceph-node2
ceph-deploy mon create ceph-node3
ceph-deploy disk zap ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd
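
After zapping the disks, OSDs would be created on them. A sketch of that step for the same disk layout (not part of the original notes; the exact syntax depends on the ceph-deploy version):

ceph-deploy osd create ceph-node1:sdb ceph-node1:sdc ceph-node1:sdd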

CephFS

CephFS is a POSIX-compliant distributed file system that stores its data in Ceph RADOS and additionally requires an MDS. There are two ways to use CephFS: mount it with the native kernel driver, or use ceph-fuse.

uname -r
mkdir /mnt/kernel_cephfs
cat /etc/ceph/ceph.client.admin.keyring
mount -t ceph 192.168.57.101:6789:/ /mnt/kernel_cephfs -o name=admin,secret=xxxxx
df -h
umount /mnt/kernel_cephfs
mount /mnt/kernel_cephfs
yum install ceph-fuse
mkdir /mnt/cephfs
ceph-fuse -m 192.168.57.101:6789 /mnt/cephfs
umount /mnt/cephfs
mount /mnt/cephfs
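
The bare mount /mnt/kernel_cephfs and mount /mnt/cephfs calls above only work if matching entries exist in /etc/fstab. A sketch of what those entries might look like (the secret file path is an assumption):

# kernel client: keep the admin key in a file rather than on the command line
echo 'xxxxx' > /etc/ceph/admin.secret
echo '192.168.57.101:6789:/ /mnt/kernel_cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2' >> /etc/fstab
# ceph-fuse client
echo 'id=admin /mnt/cephfs fuse.ceph defaults 0 0' >> /etc/fstab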

Ceph RGW

In a Ceph cluster the RADOS gateway is usually set up on a machine separate from the MONs and OSDs, but a MON machine can also be used to host the RGW.

VBoxManage createvm --name ceph-rgw --ostype RedHat_64 --register
VBoxManage modifyvm ceph-rgw --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet1
VBoxManage storagectl ceph-rgw --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on --bootable on
VBoxManage createhd --filename OS-ceph-rgw.vdi --size 10240
VBoxManage storagectl ceph-rgw --name "SATA Controller" --add sata --controller IntelAHCI --hostiocache on --bootable on
VBoxManage storageattach ceph-rgw --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium OS-ceph-rgw.vdi
VBoxManage startvm ceph-rgw --type gui

radosgw-admin user create --uid=mona --display-name="Monika" --email=mona@example.com

radosgw-admin caps add --uid=mona --caps="users=*"
radosgw-admin caps add --uid=mona --caps="buckets=*"
radosgw-admin caps add --uid=mona --caps="metadata=*"
radosgw-admin caps add --uid=mona --caps="zone=*"

s3cmd ls
s3cmd mb s3://first-bucket
s3cmd put /etc/hosts s3://first-bucket
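
The s3cmd calls above assume s3cmd has already been configured against this gateway. A minimal ~/.s3cfg sketch (the endpoint is an assumption; the keys come from the radosgw-admin user create output):

[default]
access_key = <access_key_of_mona>
secret_key = <secret_key_of_mona>
host_base = ceph-rgw:80
host_bucket = ceph-rgw:80/%(bucket)
use_https = False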

Ceph RGW Deployment

Deployment on the test cluster.

# ceph -v
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

radosgw is not installed; the process list below shows no radosgw daemon.

# ps -ef | grep ceph
ceph       30636        2018 ?        1-08:42:37 /usr/bin/ceph-osd -f --cluster ceph --id 14 --setuser ceph --setgroup ceph
ceph       30757        2018 ?        1-08:06:57 /usr/bin/ceph-osd -f --cluster ceph --id 19 --setuser ceph --setgroup ceph
ceph       30790        2018 ?        1-10:04:58 /usr/bin/ceph-osd -f --cluster ceph --id 17 --setuser ceph --setgroup ceph
ceph       30797        2018 ?        1-04:42:24 /usr/bin/ceph-osd -f --cluster ceph --id 16 --setuser ceph --setgroup ceph
ceph       30798        2018 ?        1-12:16:49 /usr/bin/ceph-osd -f --cluster ceph --id 10 --setuser ceph --setgroup ceph
ceph       30799        2018 ?        1-11:52:51 /usr/bin/ceph-osd -f --cluster ceph --id 18 --setuser ceph --setgroup ceph
ceph       30816        2018 ?        1-14:37:58 /usr/bin/ceph-osd -f --cluster ceph --id 12 --setuser ceph --setgroup ceph
ceph       30817        2018 ?        1-08:24:16 /usr/bin/ceph-osd -f --cluster ceph --id 15 --setuser ceph --setgroup ceph
ceph       30818        2018 ?        1-13:32:57 /usr/bin/ceph-osd -f --cluster ceph --id 9 --setuser ceph --setgroup ceph
ceph       30827        2018 ?        1-10:57:33 /usr/bin/ceph-osd -f --cluster ceph --id 11 --setuser ceph --setgroup ceph
ceph       30828        2018 ?        1-11:44:25 /usr/bin/ceph-osd -f --cluster ceph --id 20 --setuser ceph --setgroup ceph
ceph       30856        2018 ?        1-12:29:54 /usr/bin/ceph-osd -f --cluster ceph --id 13 --setuser ceph --setgroup ceph
ceph       50161        2018 ?        1-23:52:38 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-test1 --setuser ceph --setgroup ceph
ceph      818952        2019 ?        12:19:36 /usr/bin/ceph-mon -f --cluster ceph --id ceph-test1 --setuser ceph --setgroup ceph

Check the Ceph configuration.

# cat ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
client_reconnect_stale = true
fsid = 531938b9-55d0-4a96-bdad-a767a58cf509
mon_host = 100.112.28.70, 100.112.28.71, 100.112.28.72
mon_initial_members = ceph-test1, ceph-test2, ceph-test3
mon_warn_on_legacy_crush_tunables = false
mon_pg_warn_max_per_osd = 500
rbd_default_features = 3
ms_type = async

debug_lockdep = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_timer = 0/0
debug_filer = 0/0
debug_objecter = 0/0
debug_rados = 0/0
debug_rbd = 0/0
debug_journaler = 0/0
debug_objectcatcher = 0/0
debug_client = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_ms = 0/0
debug_monc = 0/0
debug_tp = 0/0
debug_auth = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_perfcounter = 0/0
debug_asok = 0/0
debug_throttle = 0/0
debug_mon = 0/0
debug_paxos = 0/0
debug_rgw = 0/0
[osd]
filestore_fd_cache_shards = 32
filestore_fd_cache_size = 1024

filestore_max_sync_interval = 10
filestore_min_sync_interval = 5

filestore_queue_committing_max_bytes = 1048576000
filestore_queue_committing_max_ops = 5000
filestore_queue_max_bytes = 1048576000
filestore_queue_max_ops = 5000

filestore_wbthrottle_enable = false
filestore_xattr_use_omap = true

journal_max_write_bytes = 1048576000
journal_max_write_entries = 3000
journal_queue_max_bytes = 1048576000
journal_queue_max_ops = 3000

filestore_op_threads = 4
filestore_ondisk_finisher_threads = 1
filestore_apply_finisher_threads = 1

ms_dispatch_throttle_bytes = 1048576000
objecter_inflight_op_bytes = 1048576000

osd_op_threads = 4
osd_op_shard_threads = 4
osd_op_num_shards = 8
osd_disk_threads = 1
osd_mount_options_xfs = rw, noatime, nobarrier, inode64
osd_journal_size = 10240
osd_max_write_size = 5120
osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
osd_crush_location_hook = /data/ceph/ceph_osd_location.pl

Add an RGW section to the configuration.

[client.radosgw.gateway]
host = cephadmin
keyring = /etc/ceph/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.gateway.log
rgw_frontends =civetweb port=80
rgw print continue = false

Run the command to start the RGW.

radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway 

No radosgw process shows up afterwards, so check the log.

2020-05-02 10:37:32.682688 7f84576ff000  0 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable), process radosgw, pid 728130
2020-05-02 10:37:32.683312 7f84576ff000  0 pidfile_write: ignore empty --pid-file
2020-05-02 10:37:32.707860 7f84576ff000 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.radosgw.keyring: (2) No such file or directory
2020-05-02 10:37:32.707896 7f84576ff000 -1 monclient: ERROR: missing keyring, cannot use cephx for authentication
2020-05-02 10:37:32.707904 7f84576ff000  0 librados: client.radosgw.gateway initialization error (2) No such file or directory
2020-05-02 10:37:32.708215 7f84576ff000 -1 Couldn't init storage provider (RADOS)

Create a user and keyring for radosgw. First, create a keyring for the gateway server:

ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
chmod +r /etc/ceph/ceph.client.radosgw.keyring

Generate a name and key for the gateway instance client.radosgw.gateway:

ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.gateway --gen-key

Add capabilities to the keyring:

ceph-authtool -n client.radosgw.gateway --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring

Add the key to the Ceph storage cluster:

ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring

Create the RGW-related pools; if the cluster already has them, this step can be skipped.

ceph osd pool create .rgw 128 128
ceph osd pool create .rgw.root 128 128
ceph osd pool create .rgw.control 128 128
ceph osd pool create .rgw.gc 128 128
ceph osd pool create .rgw.buckets 128 128
ceph osd pool create .rgw.buckets.index 128 128
ceph osd pool create .log 128 128
ceph osd pool create .intent-log 128 128
ceph osd pool create .usage 128 128
ceph osd pool create .users 128 128
ceph osd pool create .users.email 128 128
ceph osd pool create .users.swift 128 128
ceph osd pool create .users.uid 128 128
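
A quick sanity check after creating the pools (my own addition): list them and confirm the cluster stays healthy.

rados lspools
ceph -s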

Starting it again still fails.

radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway 

The log is as follows.

2020-05-02 11:02:25.916003 7f1a7ea0d000  0 ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable), process radosgw, pid 739382
2020-05-02 11:02:25.916555 7f1a7ea0d000  0 pidfile_write: ignore empty --pid-file
2020-05-02 11:02:25.942032 7f1a7ea0d000  0 librados: client.radosgw.gateway authentication error (22) Invalid argument
2020-05-02 11:02:25.942423 7f1a7ea0d000 -1 Couldn't init storage provider (RADOS)

The "(22) Invalid argument" authentication error usually means the key registered in the cluster does not match the local keyring, so re-import the key:

ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.gateway -i /etc/ceph/ceph.client.radosgw.keyring
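
If the "(22) Invalid argument" error persists, it is usually a mismatch between the key stored in the cluster and the one in the local keyring file. A way to compare the two (my own addition):

# key registered in the cluster
ceph auth get-key client.radosgw.gateway
# key stored in the local keyring file
ceph-authtool -p -n client.radosgw.gateway /etc/ceph/ceph.client.radosgw.keyring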

S3 Cluster Management

Ceph service commands:

  1. status
  2. start
  3. stop
  4. restart
  5. forcestop

Ceph daemons:

  1. mon
  2. osd
  3. mds
  4. ceph-radosgw

Ceph Monitoring

ceph health
ceph health detail
ceph -w
ceph -w --watch-debug
ceph -w --watch-info
ceph -w --watch-warn
ceph df
ceph status
ceph -s

ceph mon stat
ceph mon dump
ceph quorum_status

ceph osd tree
ceph osd dump
ceph osd blacklist ls
ceph osd crush dump
ceph osd crush rule list
ceph osd crush rule dump <crush_rule_name>

ceph pg stat
ceph pg dump
ceph pg 2.7d query
ceph pg dump_stuck unclean

ceph mds stat
ceph mds dump

# RADOS bench
rados bench -p data 10 write --no-cleanup
rados bench -p data 10 seq
rados bench -p data 10 rand

The .s3cfg looks like this.

# cat .s3cfg
[default]
access_key = gameai-4004ecec
secret_key = gameai-e09b8d0f
host_base = gameaishradosgw.cephrados.so.db:7480
host_bucket = gameaishradosgw.cephrados.so.db:7480/%(bucket)
use_https = False
send_chunk = 262144

Test cluster addresses

/ceph-radosgw-s3/image_1e6ig4dpgibvkulltt1fljgbr2j.png
100.112.28.70
100.112.28.71
100.112.28.72
/ceph-radosgw-s3/image_1e6ih2cmrrei1t921a017h1o030.png

As for exporters, the most urgent need right now is probably monitoring of the RADOS gateway, so let's solve that first.

  1. radosgw_usage_exporter: this may not be enough, but try it first
  2. Put an Nginx in front: probably more involved, since the RGW is currently deployed with the default civetweb frontend

A few questions raised by kevin.

On the machine 9.81.3.100 the S3 client is currently configured for gamesafe. 100.117.152.2 can be used directly; that one is the MDS.
radosgw-admin bucket stats --uid=gamesafe | egrep 'size_kb_actual|num_objects'

GameAI's Ceph Cluster Configuration File

# cat ceph.conf
[global]
auth_service_required = cephx
auth_client_required = cephx
auth_cluster_required = cephx
client_reconnect_stale = true
mon_host = 9.25.178.36, 9.25.9.213, 9.25.9.214, 9.2.175.101, 9.2.176.14, 9.25.177.220, 100.96.8.101, 100.112.148.246, 100.112.34.252, 9.25.178.35, 9.24.17.20, 100.95.24.134, 100.95.20.250, 9.51.7.89
mon_initial_members = sh-storage-node1, sh-storage-node2, sh-storage-node3, sh-storage-node4, sh-storage-node5, sh-storage-node6, sh-storage-node8, sh-storage-node10, sh-storage-node9, sh-storage-node7, sh-storage-node11, sh-storage-node13, sh-storage-node12, sh-storage-node14
fsid = 2e362b27-4010-4051-9ecd-b4982b873326
mds_log_max_expiring = 300
mon_warn_on_legacy_crush_tunables = false
mon_pg_warn_max_per_osd = 500
ms_type = async
rbd_default_features = 3

debug_lockdep = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_timer = 0/0
debug_filer = 0/0
debug_objecter = 0/0
debug_rados = 0/0
debug_rbd = 0/0
debug_journaler = 0/0
debug_objectcatcher = 0/0
debug_client = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_ms = 0/0
debug_monc = 0/0
debug_tp = 0/0
debug_auth = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_perfcounter = 0/0
debug_asok = 0/0
debug_throttle = 0/0
debug_mon = 0/0
debug_paxos = 0/0
debug_rgw = 0/0
[mds]
debug_mds = 2/5
mds_cache_size = 10000000
mds_reconnect_timeout = 300
mds_beacon_grace = 300
mds_log_max_segments = 300
mds_revoke_cap_timeout = 180
mds_standby_replay = true
mds_max_purge_files = 8192
mds_max_purge_ops_per_pg = 10
mds_cache_memory_limit = 20G
[client.radosgw.gateway]
rgw_thread_pool_size = 1200
rgw frontends = "civetweb port=7480"

RGW pools

# rados lspools
rbd
.rgw.root
default.rgw.control
default.rgw.data.root
default.rgw.gc
default.rgw.log
default.rgw.users.uid
default.rgw.users.keys
cephfs_data
cephfs_metadata
default.rgw.buckets.index
default.rgw.buckets.non-ec
default.rgw.buckets.data
default.rgw.users.email
default.rgw.usage
default.rgw.meta

RGW template configuration; refer mainly to the official documentation.

[client.radosgw.gateway]
rgw frontends=fastcgi socket_port=9000 socket_host=0.0.0.0
host=c1
keyring=/etc/ceph/ceph.client.radosgw.keyring
log file=/var/log/radosgw/client.radosgw.gateway.log
rgw print continue=false
rgw content length compat = true

An example of civetweb logs.

2020-04-23 03:37:31.937757 7f4b5a060700  1 civetweb: 0x7f4d01921000: 100.102.33.154 - - [23/Apr/2020:03:37:31 +0800] "GET /gamesafe/?location HTTP/1.1" 200 0 - -
2020-04-23 03:37:31.999280 7f4b5a060700  1 civetweb: 0x7f4d01921000: 100.102.33.154 - - [23/Apr/2020:03:37:31 +0800] "GET /gamesafe/model_config/cf_video/cfg/imagenet22k.dataset HTTP/1.1" 200 0 - -
2020-04-23 03:37:37.359241 7f4b12fd2700  1 civetweb: 0x7f4d01c11000: 100.102.33.154 - - [23/Apr/2020:03:37:37 +0800] "GET /gamesafe/?location HTTP/1.1" 200 0 - -
2020-04-23 03:37:37.424417 7f4b12fd2700  1 civetweb: 0x7f4d01c11000: 100.102.33.154 - - [23/Apr/2020:03:37:37 +0800] "GET /gamesafe/model_config/cf_video/cfg/yolov2.cfg HTTP/1.1" 200 0 - -
2020-04-23 03:38:10.105217 7f4c34a15700  1 civetweb: 0x7f4d0104a000: 100.88.65.203 - - [23/Apr/2020:03:38:10 +0800] "GET /gamesafe/?location HTTP/1.1" 200 0 - -
2020-04-23 03:38:10.211646 7f4c34a15700  1 civetweb: 0x7f4d0104a000: 100.88.65.203 - - [23/Apr/2020:03:38:10 +0800] "DELETE /gamesafe/tochecktar_done/cf_video_202004221647210126.tar HTTP/1.1" 204 0 - -
2020-04-23 03:38:19.408272 7f4b850b6700  1 civetweb: 0x7f4d0175f000: 100.88.65.203 - - [23/Apr/2020:03:38:19 +0800] "GET /gamesafe/?location HTTP/1.1" 200 0 - -
2020-04-23 03:38:19.462100 7f4b850b6700  1 civetweb: 0x7f4d0175f000: 100.88.65.203 - - [23/Apr/2020:03:38:19 +0800] "GET /gamesafe/model_config/cf_video/cfg/cifar.test.cfg HTTP/1.1" 200 0 - -
2020-04-23 03:38:21.757550 7f4b990de700  1 civetweb: 0x7f4d0168f000: 100.88.65.203 - - [23/Apr/2020:03:38:21 +0800] "GET /gamesafe/?location HTTP/1.1" 200 0 - -
2020-04-23 03:38:21.814748 7f4b990de700  1 civetweb: 0x7f4d0168f000: 100.88.65.203 - - [23/Apr/2020:03:38:21 +0800] "GET /gamesafe/model_config/cf_video/cfg/extraction.conv.cfg HTTP/1.1" 200 0 - -
2020-04-23 03:38:23.393977 7f4bda160700  1 civetweb: 0x7f4d013f1000: 100.88.65.203 - - [23/Apr/2020:03:38:23 +0800] "GET /gamesafe/?location HTTP/1.1" 200 0 - -
2020-04-23 03:38:23.453021 7f4bda160700  1 civetweb: 0x7f4d013f1000: 100.88.65.203 - - [23/Apr/2020:03:38:23 +0800] "GET /gamesafe/model_config/cf_video/cfg/imagenet22k.dataset HTTP/1.1" 200 0 - -
2020-04-23 03:38:24.524307 7f4aa96ff700  1 civetweb: 0x7f4d02060000: 100.88.65.203 - - [23/Apr/2020:03:38:24 +0800] "GET /gamesafe/?location HTTP/1.1" 200 0 - -
2020-04-23 03:38:24.584544 7f4aa96ff700  1 civetweb: 0x7f4d02060000: 100.88.65.203 - - [23/Apr/2020:03:38:24 +0800] "GET /gamesafe/model_config/cf_video/cfg/resnet50.cfg HTTP/1.1" 200 0 - -
2020-04-23 03:38:27.105492 7f4bac104700  1 civetweb: 0x7f4d015c9000: 100.88.65.203 - - [23/Apr/2020:03:38:27 +0800] "GET /gamesafe/?location HTTP/1.1" 200 0 - -
2020-04-23 03:38:27.168928 7f4bac104700  1 civetweb: 0x7f4d015c9000: 100.88.65.203 - - [23/Apr/2020:03:38:27 +0800] "GET /gamesafe/model_config/cf_video/cfg/writing.cfg HTTP/1.1" 200 0 - -

Creating a Ceph Cluster with VirtualBox on a Mac

VBoxManage hostonlyif remove vboxnet1
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet1 --ip 192.168.57.1 --netmask 255.255.255.0
VBoxManage createvm --name ceph-node1 --ostype RedHat_64 --register
VBoxManage modifyvm ceph-node1 --memory 1024 --nic1 nat --nic2 hostonly --hostonlyadapter2 vboxnet1
VBoxManage storagectl ceph-node1 --name "IDE Controller" --add ide --controller PIIX4 --hostiocache on --bootable on
VBoxManage storageattach ceph-node1 --storagectl "IDE Controller" --type dvddrive --port 0 --device 0 --medium CentOS-6.4-x86_64-bin-DVD1.iso
VBoxManage storagectl ceph-node1 --name "SATA Controller" --add sata --controller IntelAHCI --hostiocache on --bootable on
VBoxManage createhd --filename OS-ceph-node1.vdi --size 10240
VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium OS-ceph-node1.vdi

VBoxManage createhd --filename ceph-node1-osd1.vdi --size 10240
VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium ceph-node1-osd1.vdi
VBoxManage createhd --filename ceph-node1-osd2.vdi --size 10240
VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 2 --device 0 --type hdd --medium ceph-node1-osd2.vdi
VBoxManage createhd --filename ceph-node1-osd3.vdi --size 10240
VBoxManage storageattach ceph-node1 --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium ceph-node1-osd3.vdi

VBoxManage startvm ceph-node1 --type gui

# Ideally run the following over a local ssh session into the VM
echo 'HOSTNAME=ceph-node1' >> /etc/sysconfig/network
echo -e 'ONBOOT=yes\nBOOTPROTO=dhcp' >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo -e 'ONBOOT=yes\nBOOTPROTO=static\nIPADDR=192.168.57.101\nNETMASK=255.255.255.0' >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo -e '192.168.57.101 ceph-node1\n192.168.57.102 ceph-node2\n192.168.57.103 ceph-node3\n' >> /etc/hosts

VBoxManage clonevm --name ceph-node2 ceph-node1 --register
VBoxManage clonevm --name ceph-node3 ceph-node1 --register

VBoxManage startvm ceph-node1
VBoxManage startvm ceph-node2
VBoxManage startvm ceph-node3

echo 'HOSTNAME=ceph-node2' >> /etc/sysconfig/network
echo -e 'ONBOOT=yes\nBOOTPROTO=dhcp' >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo -e 'ONBOOT=yes\nBOOTPROTO=static\nIPADDR=192.168.57.102\nNETMASK=255.255.255.0' >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo -e '192.168.57.101 ceph-node1\n192.168.57.102 ceph-node2\n192.168.57.103 ceph-node3\n' >> /etc/hosts

echo 'HOSTNAME=ceph-node3' >> /etc/sysconfig/network
echo -e 'ONBOOT=yes\nBOOTPROTO=dhcp' >> /etc/sysconfig/network-scripts/ifcfg-eth0
echo -e 'ONBOOT=yes\nBOOTPROTO=static\nIPADDR=192.168.57.103\nNETMASK=255.255.255.0' >> /etc/sysconfig/network-scripts/ifcfg-eth1
echo -e '192.168.57.101 ceph-node1\n192.168.57.102 ceph-node2\n192.168.57.103 ceph-node3\n' >> /etc/hosts

# ssh into the VM
ssh root@192.168.57.101 -p test

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

If I Have a Spare Disk

The disk is currently formatted as NTFS (Windows NT File System), so macOS can only read it, not write to it (is that actually true?).

/ceph-radosgw-s3/image_1e6procg115ol4tecil1ovf17no9.png

To erase this disk, refer to the figure below.

/ceph-radosgw-s3/image_1e6prud92vni667cjf343g19m.png

After erasing, a new file system is created on the disk.

/ceph-radosgw-s3/image_1e6ps0c1s1pelnbvr1r1f8i13ff13.png

For a Linux VM's disk, I guess a macOS file system won't do; ext4 is probably the right choice.

/ceph-radosgw-s3/image_1e6ps587m9bqap1t4d1qb1lr32n.png /ceph-radosgw-s3/image_1e6ps7b4dqnp128qq88tp8g0l34.png

Here you can see that right after creation about 800 MB is already in use, probably left over from before.

/ceph-radosgw-s3/image_1e6ps8pipnau1u73ot61g1q1lm43h.png

So under the security options we can raise the erase level.

/ceph-radosgw-s3/image_1e6psabcj6731ssgl95fam1tj03u.png

You can see that this time the erase dialog mentions a 7-pass secure erase. This kind of erasure deletes the disk contents fairly securely; the downside is that the more thorough the wipe, the slower it is.

/ceph-radosgw-s3/image_1e6psdau412rec9r1dh3a8318r64o.png

Comparing Ceph Dashboards

If your Ceph version is Luminous or later, the official dashboard is by far the best choice. But if your cluster is an ancient Ceph that, for all sorts of "business stability" reasons, never gets upgraded, then getting a dashboard in place is a real headache.

Of course, much of Ceph's statistics can be visualized from the output of Ceph's own commands, and writing a small web frontend yourself is certainly possible; but for such a simple need it is hardly worth the effort. So after some digging on GitHub I found a few solutions that can serve as a stopgap, and below is a brief comparison.

ceph-dash has not been updated in quite a while; the last change was a README.md update four months ago. As I understand it, as long as the relevant ceph commands don't change, the project should still work. That said, the visualization really only covers things like ceph health status, so the content is rather limited.

/ceph-radosgw-s3/image_1ebq1j3mhhfbn931ai5ptj1bu29.png

The backend of the official Ceph dashboard is written in Python.

The Latest Approach

Luminous can enable the built-in dashboard.
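
Enabling it on Luminous is a one-liner (the default port 7000 is from memory, so verify it on your cluster):

ceph mgr module enable dashboard
# the dashboard is served by the active mgr, by default on port 7000
ceph mgr services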

For the RGW, prometheus-nginxlog-exporter can parse the Nginx logs; it also supports regular expressions for further processing, and the relabel configuration lets you attach labels to the metrics.

civetweb does not record the request response time in its logs, so the plan is to replace it with Nginx. To add request-level monitoring to Ceph RGW, the logs need to be parsed; in prometheus-nginxlog-exporter the labels, listening port and so on are set in a configuration file, and then Prometheus only has to scrape this endpoint to pull the data.

listen {
  port = 4040
}

enable_experimental = true

namespace "nginx" {
  source = {
    files = [
      "/var/log/nginx/access.log"
    ]
  }

  format = "$remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$request_time\""

  labels {
    app = "default"
  }

  #relabel "request" {
  #  from = "request"
  #}

  relabel "bucket" {
    from = "request"
    split = 2
    match "^/general__lingqu/.*" {
      replacement = "general__lingqu"
    }
  }
}
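
With the exporter listening on port 4040 as configured above, Prometheus only needs a scrape job pointing at it. A minimal sketch (the host name is a placeholder):

scrape_configs:
  - job_name: 'rgw-nginx'
    static_configs:
      - targets: ['rgw-host:4040']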

Nginx logs.

xx.xx.xxx.x - - [26/Jun/2020:01:23:37 +0800] "GET /general__lingqu/kp/out/390280/n1590907415729/107916/20200601220441/_SUCCESS HTTP/1.1" 200 0 "0.001"
xx.xx.xxx.x - - [26/Jun/2020:01:23:43 +0800] "GET /general__lingqu/kp/out/390280/n1590907415729/107916/20200601220441/part-00000 HTTP/1.1" 200 1165212977 "6.006"
xx.xx.xxx.x - - [26/Jun/2020:01:23:49 +0800] "GET /general__lingqu/kp/out/390280/n1590907415729/107916/20200601220441/part-00001 HTTP/1.1" 200 1180678766 "6.130"

The metrics collected by prometheus-nginxlog-exporter look like this:

# HELP nginx_http_response_count_total Amount of processed HTTP requests
# TYPE nginx_http_response_count_total counter
nginx_http_response_count_total{app="default",bucket="",method="GET",status="200"} 1
nginx_http_response_count_total{app="default",bucket="general__lingqu",method="DELETE",status="204"} 2
nginx_http_response_count_total{app="default",bucket="general__lingqu",method="GET",status="200"} 214
nginx_http_response_count_total{app="default",bucket="general__lingqu",method="HEAD",status="200"} 8474
nginx_http_response_count_total{app="default",bucket="general__lingqu",method="PUT",status="200"} 5
# HELP nginx_http_response_size_bytes Total amount of transferred bytes
# TYPE nginx_http_response_size_bytes counter
nginx_http_response_size_bytes{app="default",bucket="",method="GET",status="200"} 338
nginx_http_response_size_bytes{app="default",bucket="general__lingqu",method="DELETE",status="204"} 0
nginx_http_response_size_bytes{app="default",bucket="general__lingqu",method="GET",status="200"} 2.1549611919e+10
nginx_http_response_size_bytes{app="default",bucket="general__lingqu",method="HEAD",status="200"} 0
nginx_http_response_size_bytes{app="default",bucket="general__lingqu",method="PUT",status="200"} 0
# HELP nginx_http_response_time_seconds Time needed by NGINX to handle requests
# TYPE nginx_http_response_time_seconds summary
nginx_http_response_time_seconds{app="default",bucket="",method="GET",status="200",quantile="0.5"} NaN
nginx_http_response_time_seconds{app="default",bucket="",method="GET",status="200",quantile="0.9"} NaN
nginx_http_response_time_seconds{app="default",bucket="",method="GET",status="200",quantile="0.99"} NaN
nginx_http_response_time_seconds_sum{app="default",bucket="",method="GET",status="200"} 0.002
nginx_http_response_time_seconds_count{app="default",bucket="",method="GET",status="200"} 1
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="DELETE",status="204",quantile="0.5"} 0.002
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="DELETE",status="204",quantile="0.9"} 0.009
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="DELETE",status="204",quantile="0.99"} 0.009
nginx_http_response_time_seconds_sum{app="default",bucket="general__lingqu",method="DELETE",status="204"} 0.011
nginx_http_response_time_seconds_count{app="default",bucket="general__lingqu",method="DELETE",status="204"} 2
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="GET",status="200",quantile="0.5"} 0.014
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="GET",status="200",quantile="0.9"} 0.112
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="GET",status="200",quantile="0.99"} 7.037
nginx_http_response_time_seconds_sum{app="default",bucket="general__lingqu",method="GET",status="200"} 115.89000000000006
nginx_http_response_time_seconds_count{app="default",bucket="general__lingqu",method="GET",status="200"} 214
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="HEAD",status="200",quantile="0.5"} 0.002
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="HEAD",status="200",quantile="0.9"} 0.002
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="HEAD",status="200",quantile="0.99"} 0.024
nginx_http_response_time_seconds_sum{app="default",bucket="general__lingqu",method="HEAD",status="200"} 22.99799999999861
nginx_http_response_time_seconds_count{app="default",bucket="general__lingqu",method="HEAD",status="200"} 8474
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="PUT",status="200",quantile="0.5"} NaN
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="PUT",status="200",quantile="0.9"} NaN
nginx_http_response_time_seconds{app="default",bucket="general__lingqu",method="PUT",status="200",quantile="0.99"} NaN
nginx_http_response_time_seconds_sum{app="default",bucket="general__lingqu",method="PUT",status="200"} 0.839
nginx_http_response_time_seconds_count{app="default",bucket="general__lingqu",method="PUT",status="200"} 5
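
From these series, typical Grafana panels can be built with PromQL such as the following (a sketch based on the metric names above):

# request rate per bucket and method over the last 5 minutes
rate(nginx_http_response_count_total{bucket="general__lingqu"}[5m])
# p99 request latency from the exporter's summary
nginx_http_response_time_seconds{bucket="general__lingqu",quantile="0.99"}
# download throughput in bytes per second
rate(nginx_http_response_size_bytes{method="GET"}[5m])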

Ceph Monitoring

The RGW is currently deployed with the default civetweb frontend. The logs civetweb produces are quite bare, so we cannot tell whether a request succeeded or failed, how long it took, and so on, and I have not yet found a civetweb option that prints more information. To measure metrics such as latency when users read and write buckets, the plan is to put Nginx in front as a proxy, parse access.log/error.log into time-series metrics stored in Prometheus, and finally display them in Grafana.

Below is the civetweb log from the production environment.

/ceph-radosgw-s3/image_1e9pu10t5gv415ke1ngu55s1l2hp.png

Determining the Metrics to Collect

The log format can be configured in Nginx; the Telegraf configuration is as follows.

[global_tags]
  cluster_name = "Cluster01"
  host_name = "node-1"
  host_ip = "192.168.1.5"

[agent]
  interval = "5s"

[[inputs.logparser]]
  files = ["/var/log/nginx/access.log"]
  from_beginning = false
  [inputs.logparser.grok]
    patterns = ['%{NOTSPACE:remote_addr} - %{NOTSPACE:remote_user} \[%{NOTSPACE:timestamp} %{NOTSPACE:time_zone}\] \"%{NOTSPACE:http_method} %{NOTSPACE:uri} %{NOTSPACE:http_version}\" %{NUMBER:status:int} %{NUMBER:request_length:int} %{NUMBER:body_bytes_sent:int} \"%{NOTSPACE:http_referer}\" %{QUOTEDSTRING:http_user_agent} \"%{NOTSPACE:http_x_forwarded_for}\" \"%{NUMBER:request_time:float}\"']
    measurement = "nginx_access_log"

[[inputs.logparser]]
  files = ["/var/log/nginx/access.log"]
  from_beginning = false
  [inputs.logparser.grok]
    custom_patterns = '''
    SWIFT_API_PREFIX swift\/v1\/
    CEPH_BUCKET [^\/]+
    '''
    patterns = ['%{NOTSPACE:remote_addr} - %{NOTSPACE:remote_user} \[%{NOTSPACE:timestamp} %{NOTSPACE:time_zone}\] \"GET \/%{SWIFT_API_PREFIX:bucket:tag}%{CEPH_BUCKET:bucket:tag}\/%{NOTSPACE:file}.* %{NOTSPACE:http_version}\" %{NUMBER:status:int} %{NUMBER:request_length:int} %{NUMBER:body_bytes_sent:int} \"%{NOTSPACE:http_referer}\" %{QUOTEDSTRING:http_user_agent} \"%{NOTSPACE:http_x_forwarded_for}\" \"%{NUMBER:request_time:float}\"']
    measurement = "nginx_download_log"

[[inputs.logparser]]
  files = ["/var/log/nginx/access.log"]
  from_beginning = false
  [inputs.logparser.grok]
    custom_patterns = '''
    SWIFT_API_PREFIX swift\/v1\/
    CEPH_BUCKET [^\/]+
    '''
    patterns = ['%{NOTSPACE:remote_addr} - %{NOTSPACE:remote_user} \[%{NOTSPACE:timestamp} %{NOTSPACE:time_zone}\] \"PUT \/%{SWIFT_API_PREFIX:bucket:tag}%{CEPH_BUCKET:bucket:tag}\/%{NOTSPACE:file} %{NOTSPACE:http_version}\" %{NUMBER:status:int} %{NUMBER:request_length:int} %{NUMBER:body_bytes_sent:int} \"%{NOTSPACE:http_referer}\" %{QUOTEDSTRING:http_user_agent} \"%{NOTSPACE:http_x_forwarded_for}\" \"%{NUMBER:request_time:float}\"']
    measurement = "nginx_upload_log"

[[inputs.logparser]]
  files = ["/var/log/nginx/access.log"]
  from_beginning = false
  [inputs.logparser.grok]
    custom_patterns = '''
    SWIFT_API_PREFIX swift\/v1\/
    CEPH_BUCKET [^\/]+
    '''
    patterns = ['%{NOTSPACE:remote_addr} - %{NOTSPACE:remote_user} \[%{NOTSPACE:timestamp} %{NOTSPACE:time_zone}\] \"DELETE \/%{SWIFT_API_PREFIX:bucket:tag}%{CEPH_BUCKET:bucket:tag}\/%{NOTSPACE:file} %{NOTSPACE:http_version}\" %{NUMBER:status:int} %{NUMBER:request_length:int} %{NUMBER:body_bytes_sent:int} \"%{NOTSPACE:http_referer}\" %{QUOTEDSTRING:http_user_agent} \"%{NOTSPACE:http_x_forwarded_for}\" \"%{NUMBER:request_time:float}\"']
    measurement = "nginx_delete_log"

A simplified version of the grok pattern:

%{NOTSPACE:remote_addr} - %{NOTSPACE:remote_user} \[%{NOTSPACE:timestamp} %{NOTSPACE:time_zone}\] \"%{NOTSPACE:http_method} %{NOTSPACE:uri} %{NOTSPACE:http_version}\" %{BASE10NUM:status} %{BASE10NUM:request_length} %{NOTSPACE:body_bytes_sent}

Type conversion of the collected metric values.

radosgw-admin user info --uid=lingqu

nginx.conf configuration

# user nginx;
worker_processes 1;
# worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for" "$request_time" "$upstream_response_time" ';
    # runzhliu
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $request_length $body_bytes_sent '
                      '"$request_time" ';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen       9091 default_server;
        listen       [::]:9091 default_server;
        server_name  127.0.0.1;
        root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        location / {
            proxy_pass http://127.0.0.1:7480;
        }

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }
}
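
One caveat the config above does not handle (an assumption worth verifying rather than a confirmed requirement): S3 clients rely on the Host header for bucket addressing and may upload large objects, so the proxy location usually also wants something like:

location / {
    proxy_pass http://127.0.0.1:7480;
    # pass the original Host header through so bucket addressing keeps working
    proxy_set_header Host $host;
    # do not cap S3 upload sizes at Nginx's 1m default
    client_max_body_size 0;
}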

telegraf.conf configuration. Environment variables are supported.

/ceph-radosgw-s3/image_1e9t08q4qe10i63199qdat1jup16.png
[global_tags]
  cluster_name = "Cluster01"
  host_name = "node-1"
  host_ip = "192.168.1.5"

[agent]
  interval = "5s"

[[inputs.logparser]]
  files = ["/var/log/nginx/access.log"]
  from_beginning = true
  [inputs.logparser.grok]
    patterns = ['%{NOTSPACE:remote_addr} - %{NOTSPACE:remote_user} \[%{NOTSPACE:timestamp} %{NOTSPACE:time_zone}\] \"%{NOTSPACE:http_method} %{NOTSPACE:uri} %{NOTSPACE:http_version}\" %{NUMBER:status} %{NUMBER:request_length} %{NUMBER:body_bytes_sent} \"%{NUMBER:request_time:float}\"']
    measurement = "nginx_access_log"

[[outputs.prometheus_client]]
  ## Address to listen on.
  listen = ":9273"

The parsed measurements, in InfluxDB line protocol, look like this:

nginx_access_log,cluster_name=Cluster01,host=TENCENT64site,host_ip=192.168.1.5,host_name=node-1,path=/var/log/nginx/access.log remote_user="-",request_time=0.002,time_zone="+0800",http_version="HTTP/1.1",remote_addr="9.2.137.146",status="404",body_bytes_sent="213",http_method="GET",request_length="175",uri="/abc",timestamp="03/Jun/2020:15:18:50" 1591168730725367824
nginx_access_log,cluster_name=Cluster01,host=TENCENT64site,host_ip=192.168.1.5,host_name=node-1,path=/var/log/nginx/access.log remote_addr="9.2.137.146",time_zone="+0800",http_method="GET",http_version="HTTP/1.1",uri="/abc",body_bytes_sent="213",request_length="175",remote_user="-",request_time=0.002,timestamp="03/Jun/2020:15:17:23",status="404" 1591168728894147242
nginx_access_log,cluster_name=Cluster01,host=TENCENT64site,host_ip=192.168.1.5,host_name=node-1,path=/var/log/nginx/access.log remote_addr="9.2.137.146",remote_user="-",time_zone="+0800",http_method="GET",request_length="175",timestamp="03/Jun/2020:15:16:14",body_bytes_sent="213",request_time=0.001,status="404",http_version="HTTP/1.1",uri="/abc" 1591168728894126502
nginx_access_log,cluster_name=Cluster01,host=TENCENT64site,host_ip=192.168.1.5,host_name=node-1,path=/var/log/nginx/access.log http_version="HTTP/1.1",timestamp="03/Jun/2020:15:15:54",body_bytes_sent="213",remote_user="-",request_time=0.002,status="404",http_method="GET",request_length="175",time_zone="+0800",uri="/abc",remote_addr="9.2.137.146" 1591168728894079274

