SAIO (Swift All In One)¶
Note
This guide assumes an existing Linux server. A physical machine or VM will work. We recommend configuring it with at least 2GB of memory and 40GB of storage space. We recommend using a VM in order to isolate Swift and all its dependencies from other projects you may be working on.
Instructions for setting up a development VM¶
This section documents setting up a virtual machine for doing Swift development. The virtual machine will emulate running a four-node Swift cluster. To begin:
Get a Linux system server image; this guide will cover:
Ubuntu 24.04 LTS
CentOS Stream 9
Fedora
OpenSuse
Create the virtual machine from the image.
What's in a <your-user-name>¶
Much of the configuration described in this guide requires escalated administrator (root) privileges; however, we assume that the administrator logs in as an unprivileged user and can use sudo to run privileged commands.
Swift processes also run under a separate user and group, set by configuration options, and referenced as <your-user-name>:<your-group-name>. The default user is swift, which may not exist on your system. These instructions are intended to allow a developer to use his or her own username for <your-user-name>:<your-group-name>.
Note
For OpenSuse users, a user's primary group is users, so you have two options:
Change ${USER}:${USER} to ${USER}:users in all references of this guide; or
Create a group for your username and add yourself to it:
sudo groupadd ${USER} && sudo gpasswd -a ${USER} ${USER} && newgrp ${USER}
Installing dependencies¶
On apt based systems:
sudo apt-get update
sudo apt-get install curl gcc memcached rsync sqlite3 xfsprogs \
                     git-core libffi-dev python3-setuptools \
                     liberasurecode-dev libssl-dev
sudo apt-get install python3-coverage python3-dev python3-pytest \
                     python3-xattr python3-eventlet \
                     python3-greenlet python3-pastedeploy \
                     python3-pip python3-dnspython
On CentOS (requires additional repositories):
sudo dnf update
sudo dnf install epel-release
sudo dnf config-manager --enable epel extras
sudo dnf install centos-release-openstack-epoxy
sudo dnf install curl gcc memcached rsync-daemon sqlite xfsprogs git-core \
                 libffi-devel liberasurecode-devel \
                 openssl-devel python3-setuptools \
                 python3-coverage python3-devel python3-pytest \
                 python3-pyxattr python3-eventlet \
                 python3-greenlet python3-paste-deploy \
                 python3-pip python3-dns
On Fedora:
sudo dnf update
sudo dnf install curl gcc memcached rsync-daemon sqlite xfsprogs git-core \
                 libffi-devel liberasurecode-devel python3-pyeclib \
                 openssl-devel python3-setuptools \
                 python3-coverage python3-devel python3-pytest \
                 python3-pyxattr python3-eventlet \
                 python3-greenlet python3-paste-deploy \
                 python3-pip python3-dns
On OpenSuse:
sudo zypper install curl gcc memcached rsync sqlite3 xfsprogs git-core \
                    libffi-devel liberasurecode-devel python3-setuptools \
                    libopenssl-devel
sudo zypper install python3-coverage python3-devel python3-nose \
                    python3-xattr python3-eventlet python3-greenlet \
                    python3-pip python3-dnspython
Note
This installs the necessary system dependencies and most of the Python dependencies. Later in the process, setuptools/distribute or pip will install and/or upgrade packages.
Configuring storage¶
Swift requires some space on XFS filesystems to store data and run tests.
Choose either Using a partition for storage or Using a loopback device for storage.
Using a partition for storage¶
If you are going to use a separate partition for Swift data, be sure to add another device when creating the VM, and follow these instructions:
Note
The disk does not have to be /dev/sdb1 (for example, it could be /dev/vdb1); however, the mount point should still be /mnt/sdb1.
Set up a single partition on the device (this will wipe the drive):
sudo parted /dev/sdb mklabel msdos mkpart p xfs 0% 100%
Create an XFS file system on the partition:
sudo mkfs.xfs /dev/sdb1
Find the UUID of the new partition:
sudo blkid
Edit /etc/fstab and add:
UUID="<UUID-from-output-above>" /mnt/sdb1 xfs noatime 0 0
Create the Swift data mount point and test that mounting works:
sudo mkdir /mnt/sdb1
sudo mount -a
Next, skip to Common Post-Device Setup.
Using a loopback device for storage¶
If you want to use a loopback device instead of another partition, follow these instructions:
Create the file for the loopback device:
sudo mkdir -p /srv
sudo truncate -s 1GB /srv/swift-disk
sudo mkfs.xfs /srv/swift-disk
Modify the size specified in the truncate command to make a larger or smaller partition as needed.
Edit /etc/fstab and add:
/srv/swift-disk /mnt/sdb1 xfs loop,noatime 0 0
Create the Swift data mount point and test that mounting works:
sudo mkdir /mnt/sdb1
sudo mount -a
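Note that the size suffix passed to truncate matters: with GNU coreutils, -s 1GB creates a 10^9-byte file, while -s 1G would create a 2^30-byte file. A quick sketch, run against a throwaway file rather than /srv/swift-disk, confirms the difference:

```shell
# Sketch: compare truncate's decimal (GB) and binary (G) size suffixes
# on a disposable sparse file; this does not touch /srv/swift-disk.
f=$(mktemp)
truncate -s 1GB "$f"
size_gb=$(stat -c %s "$f")   # apparent size in bytes: 1000000000
truncate -s 1G "$f"
size_g=$(stat -c %s "$f")    # apparent size in bytes: 1073741824
echo "1GB=$size_gb 1G=$size_g"
rm -f "$f"
```

Because the file is sparse, it consumes almost no real disk space until data is written to it.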
Common Post-Device Setup¶
Create the individualized data links:
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
for x in {1..4}; do sudo ln -s /mnt/sdb1/$x /srv/$x; done
sudo mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
              /srv/2/node/sdb2 /srv/2/node/sdb6 \
              /srv/3/node/sdb3 /srv/3/node/sdb7 \
              /srv/4/node/sdb4 /srv/4/node/sdb8
sudo mkdir -p /var/run/swift
sudo mkdir -p /var/cache/swift /var/cache/swift2 \
              /var/cache/swift3 /var/cache/swift4
sudo chown -R ${USER}:${USER} /var/run/swift
sudo chown -R ${USER}:${USER} /var/cache/swift*
# **Make sure to include the trailing slash after /srv/$x/**
for x in {1..4}; do sudo chown -R ${USER}:${USER} /srv/$x/; done
Note
We create the mount points under /mnt/sdb1 and mount the loopback file there. The file will contain one directory per simulated Swift node, each owned by the current Swift user.
We then create symlinks to those directories under /srv. If the sdb disk or the loopback file is unmounted, files will not be written under /srv/*, because the symbolic link destinations /mnt/sdb1/* will not exist. This prevents disk sync operations from writing to the root partition if a drive is unmounted.
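The protective effect of these symlinks can be seen with a small sketch (using a scratch directory, not the real /srv layout): a write through a symlink whose target directory does not exist fails instead of silently landing on the root partition.

```shell
# Sketch: a write through a symlink with a missing target directory fails,
# which is exactly what protects / when /mnt/sdb1 is not mounted.
d=$(mktemp -d)
ln -s "$d/not-mounted/node" "$d/1"      # stand-in for /srv/1 -> /mnt/sdb1/1
if ! echo data > "$d/1/file" 2>/dev/null; then
    result="write refused"
else
    result="write succeeded"
fi
echo "$result"
rm -rf "$d"
```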
Restore appropriate permissions on reboot.
On traditional Linux systems, add the following lines to /etc/rc.local (before the exit 0):
mkdir -p /var/cache/swift /var/cache/swift2 /var/cache/swift3 /var/cache/swift4
chown <your-user-name>:<your-group-name> /var/cache/swift*
mkdir -p /var/run/swift
chown <your-user-name>:<your-group-name> /var/run/swift
On CentOS and Fedora, we can use systemd (rc.local is deprecated):
cat << EOF | sudo tee /etc/tmpfiles.d/swift.conf
d /var/cache/swift 0755 ${USER} ${USER} - -
d /var/cache/swift2 0755 ${USER} ${USER} - -
d /var/cache/swift3 0755 ${USER} ${USER} - -
d /var/cache/swift4 0755 ${USER} ${USER} - -
d /var/run/swift 0755 ${USER} ${USER} - -
EOF
On OpenSuse, nothing needs to happen here.
Note
On some systems the rc file might need to be an executable shell script.
Creating an XFS tmp dir¶
Tests require an available directory on an XFS filesystem. By default the tests use /tmp, however this can be pointed elsewhere with the TMPDIR environment variable.
Note
If your root filesystem is XFS, you can skip this section if /tmp is just a directory and not a mounted tmpfs. Alternatively, you can simply point to any existing directory you own using the TMPDIR environment variable.
If your root filesystem is not XFS, you should create a loopback device, format it with XFS and mount it. You can mount it over /tmp or to another location and specify it via the TMPDIR environment variable.
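To find out which filesystem type backs /tmp (or any candidate TMPDIR), stat can report it; this is a quick sketch assuming GNU stat:

```shell
# Sketch: report the filesystem type backing a path (GNU stat).
# If it prints anything other than "xfs", point TMPDIR at an XFS path.
fs_type=$(stat -f -c %T /tmp)
echo "/tmp is on: $fs_type"
```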
Create the file for the tmp loopback device:
sudo mkdir -p /srv
sudo truncate -s 1GB /srv/swift-tmp  # create 1GB file for XFS in /srv
sudo mkfs.xfs /srv/swift-tmp
To mount the tmp loopback device at /tmp, do:
sudo mount -o loop,noatime /srv/swift-tmp /tmp
sudo chmod -R 1777 /tmp
To persist this, edit and add the following to /etc/fstab:
/srv/swift-tmp /tmp xfs rw,noatime,attr2,inode64,noquota 0 0
To mount the tmp loopback device at another location (for example, /mnt/tmp), do:
sudo mkdir -p /mnt/tmp
sudo mount -o loop,noatime /srv/swift-tmp /mnt/tmp
sudo chown ${USER}:${USER} /mnt/tmp
To persist this, edit and add the following to /etc/fstab:
/srv/swift-tmp /mnt/tmp xfs rw,noatime,attr2,inode64,noquota 0 0
Set your TMPDIR environment variable so that Swift looks in the right location:
export TMPDIR=/mnt/tmp
echo "export TMPDIR=/mnt/tmp" >> $HOME/.bashrc
Getting the code¶
Check out the python-swiftclient repo:
cd $HOME; git clone https://opendev.org/openstack/python-swiftclient.git
Build a development installation of python-swiftclient:
cd $HOME/python-swiftclient; sudo python3 setup.py develop; cd -
Check out the Swift repo:
git clone https://github.com/openstack/swift.git
Build a development installation of Swift:
cd $HOME/swift; sudo pip install --no-binary cryptography -r requirements.txt; sudo python setup.py develop; cd -
Note
Due to a difference in how libssl.so is named in OpenSuse vs. other Linux distros, wheels/binaries will not work; therefore we use --no-binary cryptography to build cryptography locally.
Fedora users might have to perform the following if the development installation of Swift fails:
sudo pip install -U xattr
Install Swift's test dependencies:
cd $HOME/swift; sudo pip install -r test-requirements.txt
Setting up rsync¶
Create /etc/rsyncd.conf:
sudo cp $HOME/swift/doc/saio/rsyncd.conf /etc/
sudo sed -i "s/<your-user-name>/${USER}/" /etc/rsyncd.conf
Here are the contents of the default rsyncd.conf file maintained in the repo that is copied and fixed up above:

uid = <your-user-name>
gid = <your-user-name>
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 0.0.0.0

[account6212]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/account6212.lock

[account6222]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/account6222.lock

[account6232]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/account6232.lock

[account6242]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/account6242.lock

[container6211]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/container6211.lock

[container6221]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/container6221.lock

[container6231]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/container6231.lock

[container6241]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/container6241.lock

[object6210]
max connections = 25
path = /srv/1/node/
read only = false
lock file = /var/lock/object6210.lock

[object6220]
max connections = 25
path = /srv/2/node/
read only = false
lock file = /var/lock/object6220.lock

[object6230]
max connections = 25
path = /srv/3/node/
read only = false
lock file = /var/lock/object6230.lock

[object6240]
max connections = 25
path = /srv/4/node/
read only = false
lock file = /var/lock/object6240.lock
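The module names above encode the SAIO port convention: each port is 6200 + 10 × node number + a service digit (0 = object, 1 = container, 2 = account), so object on node 1 listens on 6210 and account on node 4 on 6242. A small hypothetical helper (not part of Swift or this repo) that reproduces the scheme:

```shell
# Hypothetical helper reproducing the SAIO port numbering:
# port = 6200 + 10 * node + service digit (object=0, container=1, account=2)
saio_port() {
    local node=$1 service=$2 digit
    case $service in
        object)    digit=0 ;;
        container) digit=1 ;;
        account)   digit=2 ;;
        *) return 1 ;;
    esac
    echo $((6200 + 10 * node + digit))
}

saio_port 1 object     # 6210
saio_port 4 account    # 6242
```

The same ports reappear later in the per-node server configs, so this convention is worth keeping in mind when reading them.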
Enable the rsync daemon
On Ubuntu, edit the following line in /etc/default/rsync:
RSYNC_ENABLE=true
Note
You might have to create the file to perform the edits.
On CentOS and Fedora, enable the systemd service:
sudo systemctl enable rsyncd
On OpenSuse, nothing needs to happen here.
On platforms with SELinux in Enforcing mode, either set it to Permissive:
sudo setenforce Permissive
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Or just allow rsync full access:
sudo setsebool -P rsync_full_access 1
Start the rsync daemon
On Ubuntu 14.04, run:
sudo service rsync restart
On Ubuntu 16.04, run:
sudo systemctl enable rsync
sudo systemctl start rsync
On CentOS, Fedora and OpenSuse, run:
sudo systemctl start rsyncd
On other xinetd based systems, simply run:
sudo service xinetd restart
Verify rsync is accepting connections for all servers:
rsync rsync://pub@localhost/
You should see the following output from the above command:
account6212
account6222
account6232
account6242
container6211
container6221
container6231
container6241
object6210
object6220
object6230
object6240
Starting memcached¶
On non-Ubuntu distros you need to ensure memcached is running:
sudo service memcached start
sudo chkconfig memcached on
or
sudo systemctl enable memcached
sudo systemctl start memcached
The tempauth middleware stores tokens in memcached. If memcached is not running, tokens cannot be validated, and accessing Swift becomes impossible.
Optional: Setting up rsyslog for individual logging¶
Fedora and OpenSuse may not have rsyslog installed, in which case you will need to install it if you want to use individual logging.
Install rsyslogd
On Fedora:
sudo dnf install rsyslog
On OpenSuse:
sudo zypper install rsyslog
Install the Swift rsyslogd configuration:
sudo cp $HOME/swift/doc/saio/rsyslog.d/10-swift.conf /etc/rsyslog.d/
Be sure to review that conf file to determine if you want all the logs in one file vs. all the logs separated out, and if you want hourly logs for stats processing. For convenience, we provide its default contents below:

# Uncomment the following to have a log containing all logs together
#local1,local2,local3,local4,local5.*   /var/log/swift/all.log

# Uncomment the following to have hourly proxy logs for stats processing
#$template HourlyProxyLog,"/var/log/swift/hourly/%$YEAR%%$MONTH%%$DAY%%$HOUR%"
#local1.*;local1.!notice ?HourlyProxyLog

local1.*;local1.!notice /var/log/swift/proxy.log
local1.notice           /var/log/swift/proxy.error
local1.*                ~

local2.*;local2.!notice /var/log/swift/storage1.log
local2.notice           /var/log/swift/storage1.error
local2.*                ~

local3.*;local3.!notice /var/log/swift/storage2.log
local3.notice           /var/log/swift/storage2.error
local3.*                ~

local4.*;local4.!notice /var/log/swift/storage3.log
local4.notice           /var/log/swift/storage3.error
local4.*                ~

local5.*;local5.!notice /var/log/swift/storage4.log
local5.notice           /var/log/swift/storage4.error
local5.*                ~

local6.*;local6.!notice /var/log/swift/expirer.log
local6.notice           /var/log/swift/expirer.error
local6.*                ~
Edit /etc/rsyslog.conf and make the following change (usually in the "GLOBAL DIRECTIVES" section):
$PrivDropToGroup adm
If using hourly logs (see above), then perform:
sudo mkdir -p /var/log/swift/hourly
Otherwise perform:
sudo mkdir -p /var/log/swift
Set up the logging directory and start syslog:
On Ubuntu:
sudo chown -R syslog.adm /var/log/swift
sudo chmod -R g+w /var/log/swift
sudo service rsyslog restart
On CentOS, Fedora and OpenSuse:
sudo chown -R root:adm /var/log/swift
sudo chmod -R g+w /var/log/swift
sudo systemctl restart rsyslog
sudo systemctl enable rsyslog
Configuring each node¶
After performing the following steps, be sure to verify that Swift has access to the resulting configuration files (sample configuration files are provided with all defaults in line-by-line comments).
Optionally remove an existing swift directory:
sudo rm -rf /etc/swift
Populate the /etc/swift directory itself:
cd $HOME/swift/doc; sudo cp -r saio/swift /etc/swift; cd -
sudo chown -R ${USER}:${USER} /etc/swift
Update <your-user-name> references in the Swift config files:
find /etc/swift/ -name \*.conf | xargs sudo sed -i "s/<your-user-name>/${USER}/"
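The effect of that sed substitution can be checked on a scratch file; this sketch mirrors the command above without touching /etc/swift:

```shell
# Sketch: the same <your-user-name> substitution, applied to a scratch file.
tmp=$(mktemp)
printf 'user = <your-user-name>\nuid = <your-user-name>\n' > "$tmp"
sed -i "s/<your-user-name>/${USER}/" "$tmp"
remaining=$(grep -c '<your-user-name>' "$tmp" || true)
echo "placeholders left: $remaining"
rm -f "$tmp"
```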
The contents of the configuration files provided by executing the above commands are as follows:
/etc/swift/swift.conf

[swift-hash]
# random unique strings that can never change (DO NOT LOSE)
# Use only printable chars (python -c "import string; print(string.printable)")
swift_hash_path_prefix = changeme
swift_hash_path_suffix = changeme

[storage-policy:0]
name = gold
policy_type = replication
default = yes

[storage-policy:1]
name = silver
policy_type = replication

[storage-policy:2]
name = ec42
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 2
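The changeme values for swift_hash_path_prefix and swift_hash_path_suffix should be replaced with random printable strings that are then never changed. A sketch of one way to generate such values (openssl rand is just an example; any random printable strings work):

```shell
# Sketch: generate random printable values for the swift-hash secrets.
# These are example values, not ones required by Swift.
prefix=$(openssl rand -hex 16)
suffix=$(openssl rand -hex 16)
echo "swift_hash_path_prefix = $prefix"
echo "swift_hash_path_suffix = $suffix"
```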
/etc/swift/proxy-server.conf

[DEFAULT]
bind_ip = 127.0.0.1
bind_port = 8080
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL1
eventlet_debug = true

[pipeline:main]
# Yes, proxy-logging appears twice. This is so that
# middleware-originated requests get logged too.
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache etag-quoter listing_formats bulk tempurl ratelimit crossdomain container_sync tempauth staticweb copy container-quotas account-quotas slo dlo versioned_writes symlink proxy-logging proxy-server

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:bulk]
use = egg:swift#bulk

[filter:ratelimit]
use = egg:swift#ratelimit

[filter:crossdomain]
use = egg:swift#crossdomain

[filter:dlo]
use = egg:swift#dlo

[filter:slo]
use = egg:swift#slo

[filter:container_sync]
use = egg:swift#container_sync
current = //saio/saio_endpoint

[filter:tempurl]
use = egg:swift#tempurl

[filter:tempauth]
use = egg:swift#tempauth
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test_tester2 = testing2 .admin
user_test_tester3 = testing3
user_test2_tester2 = testing2 .admin

[filter:staticweb]
use = egg:swift#staticweb

[filter:account-quotas]
use = egg:swift#account_quotas

[filter:container-quotas]
use = egg:swift#container_quotas

[filter:cache]
use = egg:swift#memcache

[filter:etag-quoter]
use = egg:swift#etag_quoter
enable_by_default = false

[filter:gatekeeper]
use = egg:swift#gatekeeper

[filter:versioned_writes]
use = egg:swift#versioned_writes
allow_versioned_writes = true
allow_object_versioning = true

[filter:copy]
use = egg:swift#copy

[filter:listing_formats]
use = egg:swift#listing_formats

[filter:domain_remap]
use = egg:swift#domain_remap

[filter:symlink]
use = egg:swift#symlink

# To enable, add the s3api middleware to the pipeline before tempauth
[filter:s3api]
use = egg:swift#s3api
s3_acl = yes
check_bucket_owner = yes
cors_preflight_allow_origin = *

# Example to create root secret: `openssl rand -base64 32`
[filter:keymaster]
use = egg:swift#keymaster
encryption_root_secret = changeme/changeme/changeme/changeme/change/=

# To enable use of encryption add both middlewares to pipeline, example:
# <other middleware> keymaster encryption proxy-logging proxy-server
[filter:encryption]
use = egg:swift#encryption

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true
/etc/swift/object-expirer.conf

[DEFAULT]
# swift_dir = /etc/swift
user = <your-user-name>
# You can specify default log routing here if you want:
log_name = object-expirer
log_facility = LOG_LOCAL6
log_level = INFO
#log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[object-expirer]
interval = 300
# report_interval = 300
# concurrency is the level of concurrency to use to do the work, this value
# must be set to at least 1
# concurrency = 1
# processes is how many parts to divide the work into, one part per process
# that will be doing the work
# processes set 0 means that a single process will be doing all the work
# processes can also be specified on the command line and will override the
# config value
# processes = 0
# process is which of the parts a particular process will work on
# process can also be specified on the command line and will override the config
# value
# process is "zero based", if you want to use 3 processes, you should run
# processes with process set to 0, 1, and 2
# process = 0

[pipeline:main]
pipeline = catch_errors cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
/etc/swift/container-sync-realms.conf

[saio]
key = changeme
key2 = changeme
cluster_saio_endpoint = http://127.0.0.1:8080/v1/
/etc/swift/account-server/1.conf

[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6212
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[account-replicator]
rsync_module = {replication_ip}::account{replication_port}

[account-auditor]

[account-reaper]
/etc/swift/container-server/1.conf

[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6211
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[container-replicator]
rsync_module = {replication_ip}::container{replication_port}

[container-updater]

[container-auditor]

[container-sync]

[container-sharder]
auto_shard = true
rsync_module = {replication_ip}::container{replication_port}
# This is intentionally much smaller than the default of 1,000,000 so tests
# can run in a reasonable amount of time
shard_container_threshold = 100
# The probe tests make explicit assumptions about the batch sizes
shard_scanner_batch_size = 10
cleave_batch_size = 2
/etc/swift/container-reconciler/1.conf

[DEFAULT]
# swift_dir = /etc/swift
user = <your-user-name>
# You can specify default log routing here if you want:
# log_name = swift
log_facility = LOG_LOCAL2
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[container-reconciler]
# reclaim_age = 604800
# interval = 300
# request_tries = 3
processes = 4
process = 0

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
/etc/swift/object-server/1.conf

[DEFAULT]
devices = /srv/1/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.1
bind_port = 6210
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL2
recon_cache_path = /var/cache/swift
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[object-replicator]
rsync_module = {replication_ip}::object{replication_port}

[object-reconstructor]

[object-updater]

[object-auditor]

[object-relinker]
/etc/swift/account-server/2.conf

[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.2
bind_port = 6222
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[account-replicator]
rsync_module = {replication_ip}::account{replication_port}

[account-auditor]

[account-reaper]
/etc/swift/container-server/2.conf

[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.2
bind_port = 6221
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[container-replicator]
rsync_module = {replication_ip}::container{replication_port}

[container-updater]

[container-auditor]

[container-sync]

[container-sharder]
auto_shard = true
rsync_module = {replication_ip}::container{replication_port}
# This is intentionally much smaller than the default of 1,000,000 so tests
# can run in a reasonable amount of time
shard_container_threshold = 100
# The probe tests make explicit assumptions about the batch sizes
shard_scanner_batch_size = 10
cleave_batch_size = 2
/etc/swift/container-reconciler/2.conf

[DEFAULT]
# swift_dir = /etc/swift
user = <your-user-name>
# You can specify default log routing here if you want:
# log_name = swift
log_facility = LOG_LOCAL3
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[container-reconciler]
# reclaim_age = 604800
# interval = 300
# request_tries = 3
processes = 4
process = 1

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
/etc/swift/object-server/2.conf

[DEFAULT]
devices = /srv/2/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.2
bind_port = 6220
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL3
recon_cache_path = /var/cache/swift2
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[object-replicator]
rsync_module = {replication_ip}::object{replication_port}

[object-reconstructor]

[object-updater]

[object-auditor]

[object-relinker]
/etc/swift/account-server/3.conf

[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.3
bind_port = 6232
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[account-replicator]
rsync_module = {replication_ip}::account{replication_port}

[account-auditor]

[account-reaper]
/etc/swift/container-server/3.conf

[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.3
bind_port = 6231
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[container-replicator]
rsync_module = {replication_ip}::container{replication_port}

[container-updater]

[container-auditor]

[container-sync]

[container-sharder]
auto_shard = true
rsync_module = {replication_ip}::container{replication_port}
# This is intentionally much smaller than the default of 1,000,000 so tests
# can run in a reasonable amount of time
shard_container_threshold = 100
# The probe tests make explicit assumptions about the batch sizes
shard_scanner_batch_size = 10
cleave_batch_size = 2
/etc/swift/container-reconciler/3.conf

[DEFAULT]
# swift_dir = /etc/swift
user = <your-user-name>
# You can specify default log routing here if you want:
# log_name = swift
log_facility = LOG_LOCAL4
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[container-reconciler]
# reclaim_age = 604800
# interval = 300
# request_tries = 3
processes = 4
process = 2

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
/etc/swift/object-server/3.conf

[DEFAULT]
devices = /srv/3/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.3
bind_port = 6230
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL4
recon_cache_path = /var/cache/swift3
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[object-replicator]
rsync_module = {replication_ip}::object{replication_port}

[object-reconstructor]

[object-updater]

[object-auditor]

[object-relinker]
/etc/swift/account-server/4.conf

[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.4
bind_port = 6242
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon account-server

[app:account-server]
use = egg:swift#account

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[account-replicator]
rsync_module = {replication_ip}::account{replication_port}

[account-auditor]

[account-reaper]
/etc/swift/container-server/4.conf

[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.4
bind_port = 6241
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon container-server

[app:container-server]
use = egg:swift#container

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[container-replicator]
rsync_module = {replication_ip}::container{replication_port}

[container-updater]

[container-auditor]

[container-sync]

[container-sharder]
auto_shard = true
rsync_module = {replication_ip}::container{replication_port}
# This is intentionally much smaller than the default of 1,000,000 so tests
# can run in a reasonable amount of time
shard_container_threshold = 100
# The probe tests make explicit assumptions about the batch sizes
shard_scanner_batch_size = 10
cleave_batch_size = 2
/etc/swift/container-reconciler/4.conf

[DEFAULT]
# swift_dir = /etc/swift
user = <your-user-name>
# You can specify default log routing here if you want:
# log_name = swift
log_facility = LOG_LOCAL5
# log_level = INFO
# log_address = /dev/log
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host =
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =

[container-reconciler]
# reclaim_age = 604800
# interval = 300
# request_tries = 3
processes = 4
process = 3

[pipeline:main]
pipeline = catch_errors proxy-logging cache proxy-server

[app:proxy-server]
use = egg:swift#proxy
# See proxy-server.conf-sample for options

[filter:cache]
use = egg:swift#memcache
# See proxy-server.conf-sample for options

[filter:proxy-logging]
use = egg:swift#proxy_logging

[filter:catch_errors]
use = egg:swift#catch_errors
# See proxy-server.conf-sample for options
/etc/swift/object-server/4.conf

[DEFAULT]
devices = /srv/4/node
mount_check = false
disable_fallocate = true
bind_ip = 127.0.0.4
bind_port = 6240
workers = 1
user = <your-user-name>
log_facility = LOG_LOCAL5
recon_cache_path = /var/cache/swift4
eventlet_debug = true

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object

[filter:recon]
use = egg:swift#recon

[filter:healthcheck]
use = egg:swift#healthcheck

[object-replicator]
rsync_module = {replication_ip}::object{replication_port}

[object-reconstructor]

[object-updater]

[object-auditor]

[object-relinker]
Setting up scripts for running Swift¶
Copy the SAIO scripts for resetting the environment:
mkdir -p $HOME/bin
cd $HOME/swift/doc; cp saio/bin/* $HOME/bin; cd -
chmod +x $HOME/bin/*
Edit the $HOME/bin/resetswift script.
The template resetswift script provided is as follows:

#!/bin/bash

set -e

swift-init all kill
swift-orphans -a 0 -k KILL
# Remove the following line if you did not set up rsyslog for individual logging:
sudo find /var/log/swift -type f -exec rm -f {} \;
if cut -d' ' -f2 /proc/mounts | grep -q /mnt/sdb1 ; then
    sudo umount /mnt/sdb1
fi
# If you are using a loopback device set SAIO_BLOCK_DEVICE to "/srv/swift-disk"
sudo mkfs.xfs -f ${SAIO_BLOCK_DEVICE:-/dev/sdb1}
sudo mount /mnt/sdb1
sudo mkdir /mnt/sdb1/1 /mnt/sdb1/2 /mnt/sdb1/3 /mnt/sdb1/4
sudo chown ${USER}:${USER} /mnt/sdb1/*
mkdir -p /srv/1/node/sdb1 /srv/1/node/sdb5 \
         /srv/2/node/sdb2 /srv/2/node/sdb6 \
         /srv/3/node/sdb3 /srv/3/node/sdb7 \
         /srv/4/node/sdb4 /srv/4/node/sdb8
sudo rm -f /var/log/debug /var/log/messages /var/log/rsyncd.log /var/log/syslog
find /var/cache/swift* -type f -name *.recon -exec rm -f {} \;
if [ "`type -t systemctl`" == "file" ]; then
    sudo systemctl restart rsyslog
    sudo systemctl restart memcached
else
    sudo service rsyslog restart
    sudo service memcached restart
fi
If you did not set up rsyslog for individual logging, remove the find /var/log/swift... line:
sed -i "/find \/var\/log\/swift/d" $HOME/bin/resetswift
Install the sample configuration file for running tests:
cp $HOME/swift/test/sample.conf /etc/swift/test.conf
The provided test.conf template is as follows:

    [s3api_test]
    endpoint = http://127.0.0.1:8080
    #ca_cert=/path/to/ca.crt
    region = us-east-1
    # First and second users should be account owners
    access_key1 = test:tester
    secret_key1 = testing
    access_key2 = test:tester2
    secret_key2 = testing2
    # Third user should be unprivileged
    access_key3 = test:tester3
    secret_key3 = testing3

    # Some tests require advanced compatibility features to pass. Add the
    # following non-default options to the s3api section of your proxy-server.conf
    # s3_acl = True
    # check_bucket_owner = True
    # Alternatively, skip those tests by setting this option to True
    s3_acl_tests_disabled = False

    [func_test]
    # Sample config for Swift with tempauth
    auth_uri = http://127.0.0.1:8080/auth/v1.0
    # Sample config for Swift with Keystone v2 API.
    # For keystone v2 change auth_version to 2 and auth_prefix to /v2.0/.
    # And "allow_account_management" should not be set "true".
    #auth_version = 3
    #auth_uri = https://:5000/v3/
    # Used by s3api functional tests, which don't contact auth directly
    #s3_storage_url = http://127.0.0.1:8080/
    #s3_region = us-east-1

    # Primary functional test account (needs admin access to the account)
    account = test
    username = tester
    password = testing
    s3_access_key = test:tester
    s3_secret_key = testing

    # User on a second account (needs admin access to the account)
    account2 = test2
    username2 = tester2
    password2 = testing2

    # User on same account as first, but without admin access
    username3 = tester3
    password3 = testing3
    # s3api requires the same account with the primary one and different users
    # one swift owner:
    s3_access_key2 = test:tester2
    s3_secret_key2 = testing2
    # one unprivileged:
    s3_access_key3 = test:tester3
    s3_secret_key3 = testing3

    # Fourth user is required for keystone v3 specific tests.
    # Account must be in a non-default domain.
    #account4 = test4
    #username4 = tester4
    #password4 = testing4
    #domain4 = test-domain

    # Fifth user is required for service token-specific tests.
    # The account must be different from the primary test account.
    # The user must not have a group (tempauth) or role (keystoneauth) on
    # the primary test account. The user must have a group/role that is unique
    # and not given to the primary tester and is specified in the options
    # <prefix>_require_group (tempauth) or <prefix>_service_roles (keystoneauth).
    #account5 = test5
    #username5 = tester5
    #password5 = testing5

    # The service_prefix option is used for service token-specific tests.
    # If service_prefix or username5 above is not supplied, the tests are skipped.
    # To set the value and enable the service token tests, look at the
    # reseller_prefix option in /etc/swift/proxy-server.conf. There must be at
    # least two prefixes. If not, add a prefix as follows (where we add SERVICE):
    # reseller_prefix = AUTH, SERVICE
    # The service_prefix must match the <prefix> used in <prefix>_require_group
    # (tempauth) or <prefix>_service_roles (keystoneauth); for example:
    # SERVICE_require_group = service
    # SERVICE_service_roles = service
    # Note: Do not enable service token tests if the first prefix in
    # reseller_prefix is the empty prefix AND the primary functional test
    # account contains an underscore.
    #service_prefix = SERVICE

    # Sixth user is required for access control tests.
    # Account must have a role for reseller_admin_role(keystoneauth).
    #account6 = test
    #username6 = tester6
    #password6 = testing6

    collate = C

    # Only necessary if a pre-existing server uses self-signed certificate
    insecure = no

    # Tests that are dependent on domain_remap middleware being installed also
    # require one of the domain_remap storage_domain values to be specified here,
    # otherwise those tests will be skipped.
    storage_domain =

    [unit_test]
    fake_syslog = False

    [probe_test]
    # check_server_timeout = 30
    # subprocess_wait_timeout = 30
    # validate_rsync = false
    # proxy_base_url = https://:8080

    [swift-constraints]
    # The functional test runner will try to use the constraint values provided in
    # the swift-constraints section of test.conf.
    #
    # If a constraint value does not exist in that section, or because the
    # swift-constraints section does not exist, the constraints values found in
    # the /info API call (if successful) will be used.
    #
    # If a constraint value cannot be found in the /info results, either because
    # the /info API call failed, or a value is not present, the constraint value
    # used will fall back to those loaded by the constraints module at time of
    # import (which will attempt to load /etc/swift/swift.conf, see the
    # swift.common.constraints module for more information).
    #
    # Note that the cluster must have "sane" values for the test suite to pass
    # (for some definition of sane).
    #
    #max_file_size = 5368709122
    #max_meta_name_length = 128
    #max_meta_value_length = 256
    #max_meta_count = 90
    #max_meta_overall_size = 4096
    #max_header_size = 8192
    #extra_header_count = 0
    #max_object_name_length = 1024
    #container_listing_limit = 10000
    #account_listing_limit = 10000
    #max_account_name_length = 256
    #max_container_name_length = 256

    # Newer swift versions default to strict cors mode, but older ones were the
    # opposite.
    #strict_cors_mode = true
Configuring environment variables for Swift¶
Add an environment variable for running tests below:

    echo "export SWIFT_TEST_CONFIG_FILE=/etc/swift/test.conf" >> $HOME/.bashrc
Make sure your PATH includes the bin directory:

    echo "export PATH=${PATH}:$HOME/bin" >> $HOME/.bashrc

If you are using a loopback device for Swift storage, add an environment variable to substitute /srv/swift-disk for /dev/sdb1:

    echo "export SAIO_BLOCK_DEVICE=/srv/swift-disk" >> $HOME/.bashrc
If you are using a device other than /dev/sdb1 for Swift storage (for example, /dev/vdb1), add an environment variable to substitute it:

    echo "export SAIO_BLOCK_DEVICE=/dev/vdb1" >> $HOME/.bashrc
If you are using a location other than /tmp for Swift tmp data (for example, /mnt/tmp), set the TMPDIR environment variable:

    export TMPDIR=/mnt/tmp
    echo "export TMPDIR=/mnt/tmp" >> $HOME/.bashrc
Source the above environment variables into your current environment:

    . $HOME/.bashrc
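Once sourced, a quick check can confirm which of these variables are visible in your current shell. A minimal sketch (SAIO_BLOCK_DEVICE and TMPDIR will only show up if you chose to add them in the steps above):

```shell
# Print each SAIO-related environment variable, or note that it is unset.
for var in SWIFT_TEST_CONFIG_FILE SAIO_BLOCK_DEVICE TMPDIR; do
  value=$(printenv "$var" || true)
  if [ -n "$value" ]; then
    echo "$var=$value"
  else
    echo "$var is not set"
  fi
done
```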
Constructing initial rings¶
Construct the initial rings using the provided script:

    remakerings

The provided remakerings script is as follows:

    #!/bin/bash

    set -e

    cd /etc/swift

    rm -f *.builder *.ring.gz backups/*.builder backups/*.ring.gz

    swift-ring-builder object.builder create 10 3 1
    swift-ring-builder object.builder add r1z1-127.0.0.1:6210/sdb1 1
    swift-ring-builder object.builder add r1z2-127.0.0.2:6220/sdb2 1
    swift-ring-builder object.builder add r1z3-127.0.0.3:6230/sdb3 1
    swift-ring-builder object.builder add r1z4-127.0.0.4:6240/sdb4 1
    swift-ring-builder object.builder rebalance
    swift-ring-builder object-1.builder create 10 2 1
    swift-ring-builder object-1.builder add r1z1-127.0.0.1:6210/sdb1 1
    swift-ring-builder object-1.builder add r1z2-127.0.0.2:6220/sdb2 1
    swift-ring-builder object-1.builder add r1z3-127.0.0.3:6230/sdb3 1
    swift-ring-builder object-1.builder add r1z4-127.0.0.4:6240/sdb4 1
    swift-ring-builder object-1.builder rebalance
    swift-ring-builder object-2.builder create 10 6 1
    swift-ring-builder object-2.builder add r1z1-127.0.0.1:6210/sdb1 1
    swift-ring-builder object-2.builder add r1z1-127.0.0.1:6210/sdb5 1
    swift-ring-builder object-2.builder add r1z2-127.0.0.2:6220/sdb2 1
    swift-ring-builder object-2.builder add r1z2-127.0.0.2:6220/sdb6 1
    swift-ring-builder object-2.builder add r1z3-127.0.0.3:6230/sdb3 1
    swift-ring-builder object-2.builder add r1z3-127.0.0.3:6230/sdb7 1
    swift-ring-builder object-2.builder add r1z4-127.0.0.4:6240/sdb4 1
    swift-ring-builder object-2.builder add r1z4-127.0.0.4:6240/sdb8 1
    swift-ring-builder object-2.builder rebalance
    swift-ring-builder container.builder create 10 3 1
    swift-ring-builder container.builder add r1z1-127.0.0.1:6211/sdb1 1
    swift-ring-builder container.builder add r1z2-127.0.0.2:6221/sdb2 1
    swift-ring-builder container.builder add r1z3-127.0.0.3:6231/sdb3 1
    swift-ring-builder container.builder add r1z4-127.0.0.4:6241/sdb4 1
    swift-ring-builder container.builder rebalance
    swift-ring-builder account.builder create 10 3 1
    swift-ring-builder account.builder add r1z1-127.0.0.1:6212/sdb1 1
    swift-ring-builder account.builder add r1z2-127.0.0.2:6222/sdb2 1
    swift-ring-builder account.builder add r1z3-127.0.0.3:6232/sdb3 1
    swift-ring-builder account.builder add r1z4-127.0.0.4:6242/sdb4 1
    swift-ring-builder account.builder rebalance
You can expect the output of this command to look like the following. Note that 3 object rings are created in order to test storage policies and EC in the SAIO environment. The EC ring is the only one with all 8 devices. There are also two replication rings, one for 3x replication and another for 2x replication, but those rings use only 4 devices:
    Device d0r1z1-127.0.0.1:6210R127.0.0.1:6210/sdb1_"" with 1.0 weight got id 0
    Device d1r1z2-127.0.0.2:6220R127.0.0.2:6220/sdb2_"" with 1.0 weight got id 1
    Device d2r1z3-127.0.0.3:6230R127.0.0.3:6230/sdb3_"" with 1.0 weight got id 2
    Device d3r1z4-127.0.0.4:6240R127.0.0.4:6240/sdb4_"" with 1.0 weight got id 3
    Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
    Device d0r1z1-127.0.0.1:6210R127.0.0.1:6210/sdb1_"" with 1.0 weight got id 0
    Device d1r1z2-127.0.0.2:6220R127.0.0.2:6220/sdb2_"" with 1.0 weight got id 1
    Device d2r1z3-127.0.0.3:6230R127.0.0.3:6230/sdb3_"" with 1.0 weight got id 2
    Device d3r1z4-127.0.0.4:6240R127.0.0.4:6240/sdb4_"" with 1.0 weight got id 3
    Reassigned 2048 (200.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
    Device d0r1z1-127.0.0.1:6210R127.0.0.1:6210/sdb1_"" with 1.0 weight got id 0
    Device d1r1z1-127.0.0.1:6210R127.0.0.1:6210/sdb5_"" with 1.0 weight got id 1
    Device d2r1z2-127.0.0.2:6220R127.0.0.2:6220/sdb2_"" with 1.0 weight got id 2
    Device d3r1z2-127.0.0.2:6220R127.0.0.2:6220/sdb6_"" with 1.0 weight got id 3
    Device d4r1z3-127.0.0.3:6230R127.0.0.3:6230/sdb3_"" with 1.0 weight got id 4
    Device d5r1z3-127.0.0.3:6230R127.0.0.3:6230/sdb7_"" with 1.0 weight got id 5
    Device d6r1z4-127.0.0.4:6240R127.0.0.4:6240/sdb4_"" with 1.0 weight got id 6
    Device d7r1z4-127.0.0.4:6240R127.0.0.4:6240/sdb8_"" with 1.0 weight got id 7
    Reassigned 6144 (600.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
    Device d0r1z1-127.0.0.1:6211R127.0.0.1:6211/sdb1_"" with 1.0 weight got id 0
    Device d1r1z2-127.0.0.2:6221R127.0.0.2:6221/sdb2_"" with 1.0 weight got id 1
    Device d2r1z3-127.0.0.3:6231R127.0.0.3:6231/sdb3_"" with 1.0 weight got id 2
    Device d3r1z4-127.0.0.4:6241R127.0.0.4:6241/sdb4_"" with 1.0 weight got id 3
    Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
    Device d0r1z1-127.0.0.1:6212R127.0.0.1:6212/sdb1_"" with 1.0 weight got id 0
    Device d1r1z2-127.0.0.2:6222R127.0.0.2:6222/sdb2_"" with 1.0 weight got id 1
    Device d2r1z3-127.0.0.3:6232R127.0.0.3:6232/sdb3_"" with 1.0 weight got id 2
    Device d3r1z4-127.0.0.4:6242R127.0.0.4:6242/sdb4_"" with 1.0 weight got id 3
    Reassigned 3072 (300.00%) partitions. Balance is now 0.00. Dispersion is now 0.00
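Note the regular layout used by remakerings: each simulated node N (1 through 4) binds its services to 127.0.0.N, with the object, container, and account servers on ports 6200+10*N, 6201+10*N, and 6202+10*N respectively. A small sketch of that scheme:

```shell
# Derive each SAIO node's IP and per-service ports from its node number.
for n in 1 2 3 4; do
  ip="127.0.0.$n"
  object_port=$((6200 + 10 * n))
  container_port=$((6201 + 10 * n))
  account_port=$((6202 + 10 * n))
  echo "node $n: $ip object=$object_port container=$container_port account=$account_port"
done
```

After rebalancing, running swift-ring-builder with just a builder file (for example, swift-ring-builder /etc/swift/object.builder) prints the ring's devices, so you can confirm this layout for yourself.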
To learn more about storage policies and your SAIO, see: Adding Storage Policies to an Existing SAIO.
Testing Swift¶
Verify the unit tests run:

    $HOME/swift/.unittests
Note that the unit tests do not require any Swift daemons to be running.
Start the "main" Swift daemons (proxy, account, container, and object):

    startmain

(The "Unable to increase file descriptor limit.  Running as non-root?" warning message is expected and ok.)

The startmain script looks like the following:

    #!/bin/bash
    set -e
    swift-init main start
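To confirm that the daemons came up, swift-init also provides a status subcommand. A hedged sketch, guarded so it does nothing if swift-init is not on your PATH:

```shell
# The "main" group covers these four server types:
MAIN_SERVERS="proxy-server account-server container-server object-server"
# Report the status of each configured server in the group.
if command -v swift-init >/dev/null 2>&1; then
  swift-init main status
fi
echo "expected servers: $MAIN_SERVERS"
```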
Get an X-Storage-Url and X-Auth-Token:

    curl -v -H 'X-Storage-User: test:tester' -H 'X-Storage-Pass: testing' http://127.0.0.1:8080/auth/v1.0
Check that you can GET the account:

    curl -v -H 'X-Auth-Token: <token-from-x-auth-token-above>' <url-from-x-storage-url-above>
Check that the swift command provided by python-swiftclient works:

    swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
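With stat working, a full round trip through the cluster exercises writes and reads as well. This is a sketch using the same tempauth credentials as above; the container name saio-test is an arbitrary choice, and the commands are guarded so nothing runs if the client is not installed:

```shell
# Round-trip smoke test: upload an object, list it, download it, clean up.
AUTH_ARGS="-A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing"
if command -v swift >/dev/null 2>&1; then
  echo "hello saio" > /tmp/hello.txt
  swift $AUTH_ARGS upload saio-test /tmp/hello.txt
  swift $AUTH_ARGS list saio-test
  swift $AUTH_ARGS download saio-test tmp/hello.txt --output /tmp/hello.out
  swift $AUTH_ARGS delete saio-test
fi
```

Note that swift upload strips the leading / from the path, so the object is stored under the name tmp/hello.txt.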
Verify the functional tests run:

    $HOME/swift/.functests
(Note: the functional tests will first delete everything in the configured accounts.)
Verify the probe tests run:

    $HOME/swift/.probetests
(Note: the probe tests will reset your environment, as they call resetswift for each test.)
Debugging Issues¶
If all doesn't go as planned, and tests fail, or you can't auth, or something doesn't work, here are some good starting places to look for issues:
Everything is logged using system facilities -- usually in /var/log/syslog, but possibly in /var/log/messages on e.g. Fedora -- so that is a good first place to look for errors (most likely python tracebacks).

Make sure all of the server processes are running. For the base functionality, the Proxy, Account, Container, and Object servers should all be running.
If one of the servers is not running, and no errors are logged to syslog, it may be useful to try to start the server manually; for example, swift-object-server /etc/swift/object-server/1.conf will start the object server. If there are problems not showing up in syslog, then you will likely see a traceback on startup.

If you need to, you can turn off syslog for unit tests. This can be useful in environments where /dev/log is unavailable, or which cannot rate limit (unit tests generate a lot of logs very quickly). Open the file that SWIFT_TEST_CONFIG_FILE points to, and change the value of fake_syslog to True.

If you encounter a 401 Unauthorized at step 12, where you check that you can GET the account, use sudo service memcached status to check whether memcache is running. If memcache is not running, start it using sudo service memcached start. Once memcache is running, rerun the GET of the account.
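Besides service memcached status, you can also talk to memcache directly over its plain-text protocol. A sketch assuming memcache on its default port 11211 and that nc is available (a healthy daemon replies to stats with STAT lines terminated by END):

```shell
# Query memcached's stats over its plain-text protocol.
MEMCACHE_HOST=127.0.0.1
MEMCACHE_PORT=11211
if command -v nc >/dev/null 2>&1; then
  printf 'stats\r\nquit\r\n' | nc -w 2 "$MEMCACHE_HOST" "$MEMCACHE_PORT" | head -n 5 || \
    echo "memcached is not reachable on $MEMCACHE_HOST:$MEMCACHE_PORT"
fi
```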
Known Issues¶
Listed here are some "gotchas" that you may run into when using or testing your SAIO:
fallocate_reserve - in most cases a SAIO does not have a very large XFS partition, so having fallocate enabled and fallocate_reserve set can cause issues, specifically when trying to run the functional tests. For this reason, fallocate has been turned off on the object servers in the SAIO. If you want to use the fallocate_reserve setting, know that the functional tests will fail unless you change the max_file_size constraint to something more reasonable (the default is 5G). Ideally you'd make it 1/4 of the size of your XFS file system so the tests can pass.
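Since the functional tests consult the cluster's advertised constraints, you can inspect what your cluster actually reports (including max_file_size) through the proxy's unauthenticated /info endpoint. A sketch assuming the SAIO proxy at 127.0.0.1:8080:

```shell
# Fetch the cluster's advertised constraints; the "swift" key of the JSON
# response includes max_file_size, which the functional tests consult.
INFO_URL="http://127.0.0.1:8080/info"
if command -v curl >/dev/null 2>&1; then
  curl -s --max-time 2 "$INFO_URL" | python3 -c \
    'import json, sys; info = json.load(sys.stdin); print(info["swift"]["max_file_size"])' \
    2>/dev/null || echo "proxy not reachable at $INFO_URL"
fi
```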