I posted this to the users@ovirt list but got no response, so I am trying my luck here.
==================================
Hi,
From the vdsm code I can see support for a storage domain of type SHARED_FS, but when trying to configure a new storage domain I don't see SHARED_FS as an available option in the domain type field of the dropdown.
I want to use shared FS to connect to a gluster mount point on my node, but I am unable to since I don't see that option.
From http://www.ovirt.org/wiki/Features/PosixFSConnection I understand that it is the same as sharedFS. So what is PosixFSConnection then? Is it just a new interface that wraps what is sharedFS today?
The status of this feature says Done, but I don't see the UI part of it. Am I missing something?
Also, please let me know whether using sharedFS to make the node use a gluster mount is the correct approach.
thanx, deepak
----- Original Message -----
From http://www.ovirt.org/wiki/Features/PosixFSConnection I understand that it is the same as sharedFS. So what is PosixFSConnection then? Is it just a new interface that wraps what is sharedFS today?
It's the proper name for the feature as it allows you to support not only shared FS types but also other local FS types (e.g. btrfs), so we adjusted the name to better reflect the feature.
The status of this feature says Done, but I don't see the UI part of it. Am I missing something?
Status says:
Done:
  Make needed changes in VDSM (http://gerrit.ovirt.org/559)
To do:
  Make needed change in oVirt-Engine
  Make needed change in the GUIs
So, as you can see, the vdsm work is done (except for bugs), but the rest is not implemented yet (patches welcome :)
Also, please let me know whether using sharedFS to make the node use a gluster mount is the correct approach.
It is the primary incentive for this feature, and it is definitely the correct approach until we have a dedicated (optimized) implementation for Gluster (one that will support the Gluster native client, etc.).
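Concretely, at the vdsm API level such a connection is just a connection entry whose vfs_type names the filesystem to mount. A rough sketch (the host and volume names below are placeholders, and the exact parameter set should be double-checked against your vdsm build):

import sys
sys.path.append('/usr/share/vdsm')

import vdscli
from storage.sd import SHAREDFS_DOMAIN

s = vdscli.connect()
# vfs_type tells vdsm which filesystem type to pass to mount -t; a local FS
# such as btrfs would use the same call with a different vfs_type.
conn = dict(id=1, spec="gluster-host.example.com:myvol", vfs_type="glusterfs", mnt_options="")
print s.connectStorageServer(SHAREDFS_DOMAIN, "posixfs example", [conn])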
When you say native gluster client, do you mean coming up with libglusterfs Python bindings and integrating them as a module in vdsm?
thanx, deepak
I mean that Gluster can be accessed either over NFS (in which case it goes through a proxy server, which is not optimal) or through its own native client, which knows how to find the correct host to contact whenever you need to access a file. Using the native client, however, would AFAIU require using native gluster commands, which would require dedicated integration in vdsm.
I tried a very simple vdsm CLI Python program and I am getting error 469 (permission denied storage exception).
This is my simple program (just trying to see if the glusterfs mount works):
NOTE: the setup needed to configure a glusterfs volume named 'myvol' is done manually before executing the code below.
#!/usr/bin/python
# GPLv2+

import sys
import uuid
import time

sys.path.append('/usr/share/vdsm')

import vdscli
from storage.sd import SHAREDFS_DOMAIN, DATA_DOMAIN, ISO_DOMAIN
from storage.volume import COW_FORMAT, SPARSE_VOL, LEAF_VOL, BLANK_UUID

spUUID = str(uuid.uuid4())
sdUUID = str(uuid.uuid4())
imgUUID = str(uuid.uuid4())
volUUID = str(uuid.uuid4())

print "spUUID = %s" % spUUID
print "sdUUID = %s" % sdUUID
print "imgUUID = %s" % imgUUID
print "volUUID = %s" % volUUID

# you should manually create the following directory and
# chown vdsm:kvm /tmp/localstoragedomain
#mntpath = "/tmp/gluster_mount"
gluster_conn = "llm65.in.ibm.com:myvol"

s = vdscli.connect()

masterVersion = 1
hostID = 1

def vdsOK(d):
    print d
    if d['status']['code']:
        raise Exception(str(d))
    return d

def waitTask(s, taskid):
    while vdsOK(s.getTaskStatus(taskid))['taskStatus']['taskState'] != 'finished':
        time.sleep(3)
    vdsOK(s.clearTask(taskid))

vdsOK(s.connectStorageServer(SHAREDFS_DOMAIN, "my gluster mount",
                             [dict(id=1, spec=gluster_conn, vfs_type="glusterfs", mnt_options="")]))
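(The UUIDs, masterVersion/hostID and the waitTask helper above are not used yet; the idea is to continue with domain, pool and volume creation once connectStorageServer succeeds. A rough sketch of those follow-up calls is below; the verb names come from the vdsm storage API, but I have not run this part or verified the exact signatures, and the sizes and names are placeholders, so treat it purely as a sketch.)

# Sketch of the intended follow-up calls -- not run yet, signatures unverified.
vdsOK(s.createStorageDomain(SHAREDFS_DOMAIN, sdUUID, "my gluster domain",
                            gluster_conn, DATA_DOMAIN, 0))
vdsOK(s.createStoragePool(0, spUUID, "my pool", sdUUID, [sdUUID], masterVersion))
vdsOK(s.connectStoragePool(spUUID, hostID, "scsikey", sdUUID, masterVersion))
# spmStart is asynchronous; a real script would wait here for the host to become SPM.
vdsOK(s.spmStart(spUUID, -1, -1, -1, 0))
waitTask(s, vdsOK(s.createVolume(sdUUID, spUUID, imgUUID, 10 * 2 ** 30,
                                 COW_FORMAT, SPARSE_VOL, LEAF_VOL, volUUID,
                                 "my volume", BLANK_UUID, BLANK_UUID))['uuid'])

For now, though, the script stops at the connectStorageServer call above.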
This is the output...
[root@llm65 vdsm]# ./dpk-sharedfs-vm.py
spUUID = 8d7a9581-0eec-4bd3-bbad-89c1b041340b
sdUUID = 17e60c50-8349-48c0-a2b7-b1638490298a
imgUUID = 7515f471-5053-4011-8a7d-ff266baa7d4b
volUUID = 276742d8-2d3b-4fab-99d1-0c6d346bb0fd
{'status': {'message': 'OK', 'code': 0}, 'statuslist': [{'status': 469, 'id': 1}]}
Since it was vdsm:kvm that created the mount point, I am unclear on why it gives the 469 storage exception.
My bad, forgot to paste the vdsm.log... here it is ...
from vdsm.log -------------
Thread-14::INFO::2012-03-02 22:28:38,275::storage_connection::146::Storage.ServerConnection::(connect) Request to connect SHAREDFS storage server
Thread-14::DEBUG::2012-03-02 22:28:38,276::mount::111::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n /bin/mount -t glusterfs llm65.in.ibm.com:myvol /rhev/data-center/mnt/llm65.in.ibm.com:myvol' (cwd None)
Thread-14::DEBUG::2012-03-02 22:28:43,974::storage_connection::224::Storage.ServerConnection::(__connectSharedFS) Unmounting file system llm65.in.ibm.com:myvol (not enough access permissions)
Thread-14::DEBUG::2012-03-02 22:28:43,976::mount::111::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n /bin/umount /rhev/data-center/mnt/llm65.in.ibm.com:myvol' (cwd None)
Thread-14::ERROR::2012-03-02 22:28:44,007::storage_connection::133::Storage.ServerConnection::(__processConnections) Error during storage connection operation:
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/storage_connection.py", line 129, in __processConnections
    rc = func(con)
  File "/usr/share/vdsm/storage/storage_connection.py", line 221, in __connectSharedFS
    validateDirAccess(mnt.fs_file)
  File "/usr/share/vdsm/storage/storage_connection.py", line 40, in validateDirAccess
    getProcPool().fileUtils.validateAccess(dirPath)
  File "/usr/share/vdsm/storage/processPool.py", line 53, in wrapper
    return self.runExternally(func, *args, **kwds)
  File "/usr/share/vdsm/storage/processPool.py", line 64, in runExternally
    return self._procPool.runExternally(*args, **kwargs)
  File "/usr/share/vdsm/storage/processPool.py", line 154, in runExternally
    raise err
StorageServerAccessPermissionError: Permission settings on the specified path do not allow access to the storage. Verify permission settings on the specified storage path.: 'path = /rhev/data-center/mnt/llm65.in.ibm.com:myvol'
I even tried doing chmod 777 on the mount path (see below); it still did not help, and I get the same error as above.
chmod 777 /rhev/data-center/mnt/llm65.in.ibm.com:myvol
[root@llm65 vdsm]# ls -l /rhev/data-center/mnt/ | grep myvol
ls: cannot access /rhev/data-center/mnt/llm19.in.ibm.com:_tmp_iso-domain-4-fcdc: Stale NFS file handle
drwxrwxrwx. 2 vdsm kvm 4096 Mar 2 22:28 llm65.in.ibm.com:myvol
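From the traceback, the failing check is validateDirAccess(), which ends up in fileUtils.validateAccess(). My understanding (a rough sketch only, not the actual vdsm code) is that it amounts to something like:

import os

def validate_dir_access(path):
    # Rough equivalent of the failing check: require that the calling process
    # (vdsm runs as vdsm:kvm) has read/write/execute access to the path.
    # The real implementation lives in storage/fileUtils.py.
    if not os.access(path, os.R_OK | os.W_OK | os.X_OK):
        raise OSError("no read/write/execute access to %s" % path)

If that is right, then while the glusterfs mount is active the permissions that matter are those of the mounted volume's root (whatever the gluster volume exports), not those of the underlying /rhev/data-center/mnt/ directory, which is what the ls above shows once vdsm has unmounted it again.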