zhurongze/devceph

Cannot upload files to testpool

Closed this issue · 3 comments

Hello, I'd like to report a problem.
I installed Ubuntu 14.04 cloud LTS on a server,
then used your devceph (master) on top of it to set up Ceph in one step. However, uploading a file to testpool hangs with no progress at all, and there are no corresponding logs under /var/log/ceph.

Environment

root@cephtest:/var/log/ceph# uname -a
Linux cephtest 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
root@cephtest:/var/log/ceph# cat /etc/issue
Ubuntu 14.04 LTS \n \l

Installation log:

Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
uuid is already the newest version.
The following NEW packages will be installed:
  ceph
0 upgraded, 1 newly installed, 0 to remove and 114 not upgraded.
Need to get 0 B/5358 kB of archives.
After this operation, 31.1 MB of additional disk space will be used.
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = "zh_CN:zh",
        LC_ALL = (unset),
        LC_TIME = "zh_CN",
        LC_MONETARY = "zh_CN",
        LC_ADDRESS = "zh_CN",
        LC_TELEPHONE = "zh_CN",
        LC_NAME = "zh_CN",
        LC_MEASUREMENT = "zh_CN",
        LC_IDENTIFICATION = "zh_CN",
        LC_NUMERIC = "zh_CN",
        LC_PAPER = "zh_CN",
        LANG = "zh_CN.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Selecting previously unselected package ceph.
(Reading database ... 56736 files and directories currently installed.)
Preparing to unpack .../ceph_0.80.7-0ubuntu0.14.04.1_amd64.deb ...
Unpacking ceph (0.80.7-0ubuntu0.14.04.1) ...
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for man-db (2.6.7.1-1) ...
locale: Cannot set LC_CTYPE to default locale: No such file or directory
locale: Cannot set LC_MESSAGES to default locale: No such file or directory
locale: Cannot set LC_ALL to default locale: No such file or directory
Setting up ceph (0.80.7-0ubuntu0.14.04.1) ...
ceph-all start/running
### Cleanup Data Directory
### Generate ceph.conf
### Initialize Ceph
temp dir is /tmp/mkcephfs.AemqkHpapV
preparing monmap in /tmp/mkcephfs.AemqkHpapV/monmap
/usr/bin/monmaptool --create --clobber --add a 192.168.1.115:6789 --print /tmp/mkcephfs.AemqkHpapV/monmap
/usr/bin/monmaptool: monmap file /tmp/mkcephfs.AemqkHpapV/monmap
/usr/bin/monmaptool: generated fsid 98f1c4e2-86c9-490a-8849-c871082ec24c
epoch 0
fsid 98f1c4e2-86c9-490a-8849-c871082ec24c
last_changed 2014-12-02 11:27:16.549822
created 2014-12-02 11:27:16.549822
0: 192.168.1.115:6789/0 mon.a
/usr/bin/monmaptool: writing epoch 0 to /tmp/mkcephfs.AemqkHpapV/monmap (1 monitors)

WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
 http://github.com/ceph/ceph-deploy
=== osd.0 ===
2014-12-02 11:27:16.899124 7f676ea38800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-12-02 11:27:17.644916 7f676ea38800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-12-02 11:27:17.658037 7f676ea38800 -1 filestore(/opt/devceph/data/osd-0) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-12-02 11:27:18.132050 7f676ea38800 -1 created object store /opt/devceph/data/osd-0 journal /opt/devceph/data/osd-0/journal for osd.0 fsid 98f1c4e2-86c9-490a-8849-c871082ec24c
2014-12-02 11:27:18.132100 7f676ea38800 -1 auth: error reading file: /opt/devceph/data/osd-0/keyring: can't open /opt/devceph/data/osd-0/keyring: (2) No such file or directory
2014-12-02 11:27:18.132185 7f676ea38800 -1 created new key in keyring /opt/devceph/data/osd-0/keyring

WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
 http://github.com/ceph/ceph-deploy
=== osd.1 ===
2014-12-02 11:27:18.466682 7f2ae267b800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-12-02 11:27:19.004158 7f2ae267b800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-12-02 11:27:19.019033 7f2ae267b800 -1 filestore(/opt/devceph/data/osd-1) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-12-02 11:27:19.507986 7f2ae267b800 -1 created object store /opt/devceph/data/osd-1 journal /opt/devceph/data/osd-1/journal for osd.1 fsid 98f1c4e2-86c9-490a-8849-c871082ec24c
2014-12-02 11:27:19.508041 7f2ae267b800 -1 auth: error reading file: /opt/devceph/data/osd-1/keyring: can't open /opt/devceph/data/osd-1/keyring: (2) No such file or directory
2014-12-02 11:27:19.508134 7f2ae267b800 -1 created new key in keyring /opt/devceph/data/osd-1/keyring

WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
 http://github.com/ceph/ceph-deploy
=== osd.2 ===
2014-12-02 11:27:19.842493 7f2288dc1800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-12-02 11:27:20.338221 7f2288dc1800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2014-12-02 11:27:20.351806 7f2288dc1800 -1 filestore(/opt/devceph/data/osd-2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
2014-12-02 11:27:20.816777 7f2288dc1800 -1 created object store /opt/devceph/data/osd-2 journal /opt/devceph/data/osd-2/journal for osd.2 fsid 98f1c4e2-86c9-490a-8849-c871082ec24c
2014-12-02 11:27:20.816828 7f2288dc1800 -1 auth: error reading file: /opt/devceph/data/osd-2/keyring: can't open /opt/devceph/data/osd-2/keyring: (2) No such file or directory
2014-12-02 11:27:20.816918 7f2288dc1800 -1 created new key in keyring /opt/devceph/data/osd-2/keyring

WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
 http://github.com/ceph/ceph-deploy
Building generic osdmap from /tmp/mkcephfs.AemqkHpapV/conf
/usr/bin/osdmaptool: osdmap file '/tmp/mkcephfs.AemqkHpapV/osdmap'
/usr/bin/osdmaptool: writing epoch 1 to /tmp/mkcephfs.AemqkHpapV/osdmap
Generating admin key at /tmp/mkcephfs.AemqkHpapV/keyring.admin
creating /tmp/mkcephfs.AemqkHpapV/keyring.admin
Building initial monitor keyring
added entity osd.0 auth auth(auid = 18446744073709551615 key=AQCWMX1UKMPfBxAAIBtYwJjrzwp+1d0yfBZArA== with 0 caps)
added entity osd.1 auth auth(auid = 18446744073709551615 key=AQCXMX1UsCpIHhAAMQIpyDEmvewF34lIwU0BHg== with 0 caps)
added entity osd.2 auth auth(auid = 18446744073709551615 key=AQCYMX1U6OGvMBAAJeYRBgDTqpgEk1J3zmcCXA== with 0 caps)

WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
 http://github.com/ceph/ceph-deploy
=== mon.a ===
/usr/bin/ceph-mon: created monfs at /opt/devceph/data/mon-a for mon.a

WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
 http://github.com/ceph/ceph-deploy
placing client.admin keyring in /etc/ceph/ceph.client.admin.keyring
### Install Over

Can you find the file /opt/devceph/data/osd-1/keyring?

After uninstalling and reinstalling, I can find /opt/devceph/data/osd-1/keyring.
However, I noticed:

root@cephtest:/opt/devceph# ./devceph.sh start
### Start Ceph Service
=== mon.a ===
Starting Ceph mon.a on cephtest...
=== osd.0 ===
Error ENOENT: osd.0 does not exist.  create it before updating the crush map
failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.0 --keyring=/opt/devceph/data/osd-0/keyring osd crush create-or-move -- 0 0.87 host=cephtest root=default'
### Seting Iptables

The startup problem above is solved. The fix is to run:

ceph osd create

once for each OSD node. After that, starting the cluster, creating a pool, and uploading files all work.
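The steps above can be sketched as a short sequence of commands (a sketch only, run against a live cluster; the count of 3 OSDs matches this deployment and is an assumption, adjust to your layout):

```shell
# Register an OSD id with the monitor for each OSD before starting.
# Each `ceph osd create` allocates the next free id (0, 1, 2, ...).
for i in 0 1 2; do
    ceph osd create
done

# Then start the services and verify the upload path:
./devceph.sh start
ceph osd pool create testpool 96 96
rados --pool=testpool put README.md README.md
rados --pool=testpool ls
```

Without the `ceph osd create` calls, `devceph.sh start` fails with the `Error ENOENT: osd.0 does not exist` message shown above, because the CRUSH update runs before the OSD ids exist in the cluster map.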

root@cephtest:/opt/devceph# ./devceph.sh start
### Start Ceph Service
=== mon.a ===
Starting Ceph mon.a on cephtest...already running
=== osd.0 ===
Starting Ceph osd.0 on cephtest...already running
=== osd.1 ===
create-or-move updated item name 'osd.1' weight 0.87 at location {host=cephtest,root=default} to crush map
Starting Ceph osd.1 on cephtest...
starting osd.1 at :/0 osd_data /opt/devceph/data/osd-1 /opt/devceph/data/osd-1/journal
=== osd.2 ===
create-or-move updated item name 'osd.2' weight 0.87 at location {host=cephtest,root=default} to crush map
Starting Ceph osd.2 on cephtest...
starting osd.2 at :/0 osd_data /opt/devceph/data/osd-2 /opt/devceph/data/osd-2/journal
### Seting Iptables
root@cephtest:/opt/devceph# ceph osd pool create testpool 96 96
pool 'testpool' created
root@cephtest:/opt/devceph# rados --pool=testpool put README.md README.md
root@cephtest:/opt/devceph#
root@cephtest:/opt/devceph# rados --pool=testpool ls
README.md

Should this step be added to README.md?
Also, do the installation-time warnings need any handling?

WARNING: mkcephfs is now deprecated in favour of ceph-deploy. Please see:
http://github.com/ceph/ceph-deploy
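As for the perl/locale warnings earlier in the install log, a likely cause is that the zh_CN locales referenced by the environment were never generated on this host (an assumption based on the `Cannot set LC_* to default locale` messages); on Ubuntu a possible fix would be:

```shell
# Assumption: zh_CN.UTF-8 has not been generated on this host.
sudo locale-gen zh_CN.UTF-8
sudo update-locale LANG=zh_CN.UTF-8
```

The mkcephfs deprecation warning itself is expected on this Ceph version and is harmless for a dev setup, though it suggests the script will eventually need to move to ceph-deploy.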