Hi Team,
I am coding the SMI-S client plugin for libstoragemgmt. I tried openlmi-storage against it, and noticed openlmi-storage is not following the SNIA SMI-S standard at all:
- CIM_StorageConfigurationService is not associated with any CIM_ComputerSystem.
- CIM_StorageVolume is not supported.
I am wondering: will openlmi-storage follow the SNIA SMI-S standard?
If so, these two could be a good start:
- DMTF DSP1033 Profile Registration
- SNIA 1.6rev4 Block Book, Array Profile
Thanks.
On 10/25/2013 12:31 PM, Gris Ge wrote:
Hi Team,
I am coding SMI-S client plugin for libstoragemgmt. I tried openlmi-storage against it, and noticed openlmi-storage is not following SNIA SMI-S standard at all:
- CIM_StorageConfigurationService is not associated with any CIM_ComputerSystem.
That looks like a bug, association to CIM_ComputerSystem is supported. What version of openlmi-storage do you use?
- CIM_StorageVolume is not supported.
Well, CIM defines both StorageVolume and LogicalDisk, and there is no way to distinguish these on Linux. Any block device can be exported using iSCSI (= it is a StorageVolume), or formatted with a filesystem (= it is a LogicalDisk), or both (!). Thus we've chosen to use StorageExtents for everything. Later, if we add iSCSI target configuration, we might expose StorageVolumes.
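To illustrate, a client can enumerate all block devices with a minimal pywbem sketch like this (host, credentials and the printed properties are just placeholders, not a documented recipe):
====
# Minimal sketch: list block devices exposed as CIM_StorageExtent.
# Host, credentials and printed properties are placeholders.
import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989',
                             ('pegasus', 'password'),
                             default_namespace='root/cimv2')

# Every block device (disk, partition, MD device, LV, ...) shows up as
# a StorageExtent subclass, so a deep enumeration returns all of them.
for extent in conn.EnumerateInstances('CIM_StorageExtent'):
    print(extent.classname, extent.get('DeviceID'), extent.get('Name'))
====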
I am wondering: will openlmi-storage follow the SNIA SMI-S standard? If so, these two could be a good start:
- DMTF DSP1033 Profile Registration
This one should be implemented in OpenLMI-0.6.0; file a bug if there is something wrong.
- SNIA 1.6rev4 Block Book, Array Profile
OpenLMI storage _currently_ focuses on configuration of local storage, while SMI-S aims at remote SAN/NAS management, having only a thin Book 6: Host Elements about configuration of actual hosts.
We reuse a lot of SMI-S concepts to configure various Linux stuff on hosts (LVM, MD RAID, ...), but in my opinion SMI-S still does not fit well here. (Almost) all SMI-S profiles refer to the Block Services package, which is a kind of 'heart' of SMI-S and, as I understand it, is _not_ applicable to Linux. The Block Services package uses disks just as big chunks of blocks added to a Primordial pool, from which you can allocate other pools and volumes/logical disks.
On Linux, we treat disks differently - in the vast majority of cases we partition them. So there is no single pool of all disks from which you can allocate other concrete pools; each disk is treated _individually_ using the OpenLMI variant of the Disk Partition Subprofile.
I wanted to have this disk - partitions - MD RAID/VG/filesystem/whatever hierarchy _explicit_; I don't want to hide everything behind a primordial pool and create partitions "automatically" as stuff gets allocated from the pool.
In other words, we do not want to create SAN management software, we want to create Linux management SW, with all Linux block devices clearly visible and explicitly created by an application.
Going back to your question, eventually we might have an API to export a StorageVolume, and it might resemble SMI-S, but we're focusing on the local storage for now. Actually, we're thinking about using libstoragemgmt to import remote LUNs (i.e. to configure the local iSCSI initiator). Configuration of an iSCSI target on the managed system is much further down the TODO list.
Jan
DISCLAIMER: I admit I am the only author of the OpenLMI Storage CIM API. I've read most of SMI-S; still, it's possible that I got it completely wrong. Please correct me if so; maybe there is a better way to represent Linux storage using SMI-S.
On Fri, Oct 25, 2013 at 01:30:33PM +0200, Jan Safranek wrote:
That looks like a bug, association to CIM_ComputerSystem is supported. What version of openlmi-storage do you use?
openlmi-storage-0.5.1-2.fc19.noarch.rpm
With openlmi-storage-0.6.0-2.elx, I got a Python traceback. Will create a bug for it.
- CIM_StorageVolume is not supported.
Well, CIM defines both StorageVolume and LogicalDisk, and there is no way to distinguish these on Linux. Any block device can be exported using iSCSI (= it is a StorageVolume), or formatted with a filesystem (= it is a LogicalDisk), or both (!). Thus we've chosen to use StorageExtents for everything. Later, if we add iSCSI target configuration, we might expose StorageVolumes.
Noted.
One of libstoragemgmt's users is seeking a way to control Linux local storage using OpenLMI-storage via the libstoragemgmt API.
Is there any diagram of LVM management (like the ones SNIA uses in the SMI-S spec PDFs) I could use to evaluate the possibility?
I am wondering: will openlmi-storage follow the SNIA SMI-S standard? If so, these two could be a good start:
- DMTF DSP1033 Profile Registration
This one should be implemented in OpenLMI-0.6.0; file a bug if there is something wrong.
I have replied to this in another email.
- SNIA 1.6rev4 Block Book, Array Profile
OpenLMI storage _currently_ focuses on configuration of local storage, while SMI-S aims at remote SAN/NAS management, having only thin Book 6: Host Elements about configuration of actual hosts.
Yes. SNIA currently lacks OS local storage management. While we implement our own approach, joining SNIA working groups and seeking help from their experts might help us build a better one.
These are just my quick thoughts on mapping Linux terms to SNIA terms:
MD:
Treat MD as a StoragePool. Use a CompositeExtent to represent the RAID layout. Since a filesystem can be created on /dev/mdX, we can represent /dev/mdX as a StorageVolume and create it once the StoragePool is created. The InstanceID could use the filename in the '/dev/disk/by-id/' folder.
A partition of /dev/mdX could be a GenericDiskPartition.
For CreateOrModifyStoragePool, 'InExtents' holds the StorageExtents, which could come from a StorageVolume, a DiskDrive, or even a StoragePool. 'Goal' is a StorageSetting which defines the RAID level.
Both ReturnToStoragePool and DeleteStoragePool remove the whole MD.
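To make this concrete, a hedged pywbem sketch of the client flow I have in mind; the service lookup, device names and string-typed InExtents paths are my assumptions, the 'Goal' selection (via StorageCapabilities.CreateSetting()) is omitted, and this is not something openlmi-storage implements today:
====
# Sketch of the proposed MD-as-StoragePool flow (assumptions: connection
# details, device names; Goal/StorageSetting selection omitted).
import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989', ('user', 'password'),
                             default_namespace='root/cimv2')

# One StorageConfigurationService per managed system is assumed here.
scs = conn.EnumerateInstanceNames('CIM_StorageConfigurationService')[0]

# InExtents: member devices of the future MD array. The standard MOF
# declares InExtents as an array of object paths encoded as strings.
members = [str(e.path) for e in conn.EnumerateInstances('CIM_StorageExtent')
           if e.get('Name') in ('/dev/sdb', '/dev/sdc')]

ret, outs = conn.InvokeMethod('CreateOrModifyStoragePool', scs,
                              ElementName='md_pool_0',
                              InExtents=members)
print('return code:', ret, 'pool:', outs.get('Pool'))
====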
SCSI Disks:
/dev/sdX as a DiskDrive. Each DiskDrive has an associated Primordial StorageExtent representing its storage space. We can follow the 'Disk Drive Lite Subprofile'.
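A rough sketch of that Disk Drive Lite traversal from the client side, assuming the provider exposed CIM_DiskDrive (connection details are placeholders):
====
# Sketch: Disk Drive Lite style traversal from a DiskDrive to its
# primordial StorageExtent via CIM_MediaPresent (assumes the provider
# exposes CIM_DiskDrive, which openlmi-storage does not do yet).
import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989', ('user', 'password'),
                             default_namespace='root/cimv2')

for drive in conn.EnumerateInstanceNames('CIM_DiskDrive'):
    for ext in conn.Associators(drive,
                                AssocClass='CIM_MediaPresent',
                                ResultClass='CIM_StorageExtent'):
        print(drive['DeviceID'], '->', ext.get('DeviceID'),
              ext.get('BlockSize'), ext.get('NumberOfBlocks'))
====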
LVM:
VG as StoragePool. LV as StorageVolume.
For CreateOrModifyStoragePool, 'InExtents' holds the StorageExtents, which could come from a StorageVolume, a DiskDrive, or even a StoragePool. Use 'Goal' for mirror or thin-provisioning settings. PVs will be created automatically and presented as StorageExtents.
For CreateOrModifyElementFromStoragePool, as you already do, we create an LV -- a StorageVolume.
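A similarly hedged sketch of the proposed VG/LV flow (names, sizes and the omitted 'Goal' are assumptions; ElementType 2 should be the 'StorageVolume' value in the CIM_StorageConfigurationService value map, as I read the MOF):
====
# Sketch of the proposed VG/LV flow (assumptions: connection details,
# device names; Goal/StorageSetting selection omitted).
import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989', ('user', 'password'),
                             default_namespace='root/cimv2')
scs = conn.EnumerateInstanceNames('CIM_StorageConfigurationService')[0]

# VG == StoragePool: built from whole-device extents; PVs would be
# created implicitly by the provider.
disks = [str(e.path) for e in conn.EnumerateInstances('CIM_StorageExtent')
         if e.get('Name') in ('/dev/sdd', '/dev/sde')]
ret, outs = conn.InvokeMethod('CreateOrModifyStoragePool', scs,
                              ElementName='vg_demo', InExtents=disks)
vg_pool = outs['Pool']

# LV == StorageVolume: ElementType 2 should mean 'StorageVolume'.
ret, outs = conn.InvokeMethod('CreateOrModifyElementFromStoragePool', scs,
                              ElementName='lv_demo',
                              ElementType=pywbem.Uint16(2),
                              InPool=vg_pool,
                              Size=pywbem.Uint64(10 * 2**30))
print('created:', outs.get('TheElement'))
====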
Going back to your question, eventually we might have an API to export a StorageVolume, and it might resemble SMI-S, but we're focusing on the local storage for now. Actually, we're thinking about using libstoragemgmt to import remote LUNs (i.e. to configure the local iSCSI initiator). Configuration of an iSCSI target on the managed system is much further down the TODO list.
With an iSCSI target, a Linux server would be a storage array, and OpenLMI-storage would be the SMI-S provider of that Linux array. That's quite an exciting feature. I have spent quite a lot of time fighting with EMC/IBM/etc. SMI-S providers. Let me know if I could be helpful.
Jan
DISCLAIMER: I admit I am the only author of the OpenLMI Storage CIM API. I've read most of SMI-S; still, it's possible that I got it completely wrong. Please correct me if so; maybe there is a better way to represent Linux storage using SMI-S.
Likewise. Please do correct me if I misunderstand the SNIA/DMTF SPEC files.
Thanks for detailed reply and maintaining this great project.
On 10/28/2013 03:51 AM, Gris Ge wrote:
On Fri, Oct 25, 2013 at 01:30:33PM +0200, Jan Safranek wrote:
That looks like a bug, association to CIM_ComputerSystem is supported. What version of openlmi-storage do you use?
openlmi-storage-0.5.1-2.fc19.noarch.rpm
With openlmi-storage-0.6.0-2.elx, I got a Python traceback. Will create a bug for it.
- CIM_StorageVolume is not supported.
Well, CIM defines both StorageVolume and LogicalDisk, and there is no way to distinguish these on Linux. Any block device can be exported using iSCSI (= it is a StorageVolume), or formatted with a filesystem (= it is a LogicalDisk), or both (!). Thus we've chosen to use StorageExtents for everything. Later, if we add iSCSI target configuration, we might expose StorageVolumes.
Noted.
One of libstoragemgmt's users is seeking a way to control Linux local storage using OpenLMI-storage via the libstoragemgmt API.
Is there any diagram of LVM management (like the ones SNIA uses in the SMI-S spec PDFs) I could use to evaluate the possibility?
There is extensive documentation incl. diagrams at http://drupal-openlmi.rhcloud.com/sites/default/files/doc/admin/openlmi-stor...
Simplified diagram (without any *Setting classes) of LVM on top of MD RAID: http://drupal-openlmi.rhcloud.com/sites/default/files/doc/admin/openlmi-stor...
Full LVM: http://drupal-openlmi.rhcloud.com/sites/default/files/doc/admin/openlmi-stor...
Full MD: http://drupal-openlmi.rhcloud.com/sites/default/files/doc/admin/openlmi-stor...
(and you can find partitioning + other stuff there too)
These are just my quick thoughts on mapping Linux terms to SNIA terms:
MD:
Treat MD as a StoragePool. Use a CompositeExtent to represent the RAID layout. Since a filesystem can be created on /dev/mdX, we can represent /dev/mdX as a StorageVolume and create it once the StoragePool is created. The InstanceID could use the filename in the '/dev/disk/by-id/' folder. A partition of /dev/mdX could be a GenericDiskPartition. For CreateOrModifyStoragePool, 'InExtents' holds the StorageExtents, which could come from a StorageVolume, a DiskDrive, or even a StoragePool. 'Goal' is a StorageSetting which defines the RAID level. Both ReturnToStoragePool and DeleteStoragePool remove the whole MD.
We use a variant of the Extent Composition Profile with StorageConfigurationService.CreateOrModifyElementFromElements(); I don't think MD is a 'pool', it's just a CompositeStorageExtent. There can be MD containers, which have pool characteristics and from which various MD arrays can be allocated, but we don't implement them for now, as I don't think they are widely used.
And we also provide a more Linux-friendly StorageConfigurationService.CreateOrModifyMDRAID().
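A quick pywbem sketch of how a client would call it; I'm writing the class and parameter names (InExtents, Level) from memory, so check the published documentation before relying on them:
====
# Quick sketch of CreateOrModifyMDRAID() from a pywbem client; class and
# parameter names are from memory, verify against the published docs.
import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989', ('user', 'password'),
                             default_namespace='root/cimv2')
scs = conn.EnumerateInstanceNames('LMI_StorageConfigurationService')[0]

# Member devices for the new RAID1 array (device names are examples).
members = [e.path for e in conn.EnumerateInstances('LMI_StorageExtent')
           if e.get('Name') in ('/dev/sdb1', '/dev/sdc1')]

ret, outs = conn.InvokeMethod('CreateOrModifyMDRAID', scs,
                              ElementName='my_raid1',
                              InExtents=members,   # REF array assumed here
                              Level=pywbem.Uint16(1))
print(ret, outs.get('TheElement'))
====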
SCSI Disks:
/dev/sdX as a DiskDrive. Each DiskDrive has an associated Primordial StorageExtent representing its storage space. We can follow the 'Disk Drive Lite Subprofile'.
We don't implement the Disk Drive Lite Subprofile yet (it's on the TODO list), but disks are StorageExtents.
LVM:
VG as StoragePool. LV as StorageVolume. For CreateOrModifyStoragePool, 'InExtents' holds the StorageExtents, which could come from a StorageVolume, a DiskDrive, or even a StoragePool. Use 'Goal' for mirror or thin-provisioning settings. PVs will be created automatically and presented as StorageExtents.
Yes, that's what we do. Apart from the InPool argument - what kind of pool would you expect here? Building a VG on top of another VG?
For CreateOrModifyElementFromStoragePool, as you already do, we create an LV -- a StorageVolume.
Going back to your question, eventually we might have an API to export a StorageVolume, and it might resemble SMI-S, but we're focusing on the local storage for now. Actually, we're thinking about using libstoragemgmt to import remote LUNs (i.e. to configure the local iSCSI initiator). Configuration of an iSCSI target on the managed system is much further down the TODO list.
With an iSCSI target, a Linux server would be a storage array, and OpenLMI-storage would be the SMI-S provider of that Linux array. That's quite an exciting feature. I have spent quite a lot of time fighting with EMC/IBM/etc. SMI-S providers. Let me know if I could be helpful.
Sure, that's an excellent idea. However, I'd like to get the local storage right first and then expose it.
On a related note, is there any library/service/cmdline tool which can configure an iSCSI target, so we could expose local devices?
Jan
DISCLAIMER: I admit I am the only author of the OpenLMI Storage CIM API. I've read most of SMI-S; still, it's possible that I got it completely wrong. Please correct me if so; maybe there is a better way to represent Linux storage using SMI-S.
Likewise. Please do correct me if I misunderstand the SNIA/DMTF SPEC files.
Thanks for detailed reply and maintaining this great project.
On Thu, Oct 31, 2013 at 09:04:08AM +0100, Jan Safranek wrote:
There is extensive documentation incl. diagrams at
(and you can find partitioning + other stuff there too)
Thanks.
Yes, that's what we do. Apart from the InPool argument - what kind of pool would you expect here? Building a VG on top of another VG?
Assuming you refer to 'InPools' of CreateOrModifyStoragePool.
InPools is the pool you want to allocate space from. Actually, every pool is allocated from the Primordial StoragePool, so normally we just ignore this parameter. When it is defined and is not the Primordial StoragePool, it means the new pool will be allocated from the defined pool. It was introduced to SNIA because some arrays, like the 'IBM DS 8000', use a mid-level pool to hold disks and a top-level pool (from which volumes can be created) to hold RAID info. For that kind of array, the top-level pool should use the 'InPools' parameter; the mid-level pool should use the 'InExtents' parameter.
AFAIK, LVM does not support that layout. SMI-S providers use StorageConfigurationCapabilities.SupportedStoragePoolFeatures to indicate which arguments CreateOrModifyStoragePool() supports.
StorageConfigurationCapabilities can also be associated with each StoragePool to indicate specific capabilities. Please check SNIA 1.6rev4, Block Book, PDF page 68, Figure 9 - Capabilities Specific to a StoragePool.
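For example, a client can discover the per-pool capabilities roughly like this (pywbem sketch; the connection details are placeholders and the value map should be verified against the MOF):
====
# Sketch: walk from each StoragePool to its specific
# StorageConfigurationCapabilities via CIM_ElementCapabilities.
import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989', ('user', 'password'),
                             default_namespace='root/cimv2')

for pool in conn.EnumerateInstanceNames('CIM_StoragePool'):
    for cap in conn.Associators(
            pool,
            AssocClass='CIM_ElementCapabilities',
            ResultClass='CIM_StorageConfigurationCapabilities'):
        # SupportedStoragePoolFeatures advertises whether InExtents and/or
        # InPools are accepted; check the value map in the MOF for the
        # exact meaning of each number.
        print(pool['InstanceID'], cap.get('SupportedStoragePoolFeatures'))
====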
Sure, that's an excellent idea. However, I'd like to get the local storage right first and then expose it.
On a related note, is there any library/service/cmdline tool which can configure an iSCSI target, so we could expose local devices?
Only a cmdline tool -- iscsiadm. I have drafted some Perl modules (unmaintained since I left QE work). And I believe the current Red Hat iSCSI QE team could also help you. I will email you the details separately.
Thanks.
On Fri, Nov 01, 2013 at 02:13:42PM +0800, Gris Ge wrote:
On a related note, is there any library/service/cmdline tool which can configure an iSCSI target, so we could expose local devices?
Only a cmdline tool -- iscsiadm
My fault. For an iSCSI target, it should be:
- tgtadm (scsi-target-utils)
- targetcli (LIO, supports both FCoE and iSCSI targets)
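In case it helps, a very rough Python sketch of driving targetcli non-interactively to export a local block device; the device path, IQN and exact command strings are examples I have not verified against the current LIO release:
====
# Rough sketch: export a local block device over iSCSI with targetcli.
# Device path, IQN and command strings are unverified examples.
import subprocess

def targetcli(command):
    """Run one non-interactive targetcli command and return its output."""
    return subprocess.check_output(['targetcli'] + command.split(),
                                   universal_newlines=True)

targetcli('/backstores/block create name=lv_demo dev=/dev/vg_demo/lv_demo')
targetcli('/iscsi create iqn.2013-11.org.example:lv-demo')
targetcli('/iscsi/iqn.2013-11.org.example:lv-demo/tpg1/luns '
          'create /backstores/block/lv_demo')
targetcli('saveconfig')
====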
Thanks.
If so, these two could be a good start:
- DMTF DSP1033 Profile Registration
Pegasus has also already implemented this, so as to avoid repeated work by different groups writing providers; you can start using it right away.
On Fri, Oct 25, 2013 at 4:01 PM, Gris Ge fge@redhat.com wrote:
Hi Team,
I am coding SMI-S client plugin for libstoragemgmt. I tried openlmi-storage against it, and noticed openlmi-storage is not following SNIA SMI-S standard at all:
- CIM_StorageConfigurationService is not associated with any CIM_ComputerSystem.
- CIM_StorageVolume is not supported.
I am wondering: will openlmi-storage follow the SNIA SMI-S standard?
If so, these two could be a good start:
- DMTF DSP1033 Profile Registration
- SNIA 1.6rev4 Block Book, Array Profile
Thanks.
-- Gris Ge
On Fri, Oct 25, 2013 at 05:05:46PM +0530, Devchandra L Meetei wrote:
If so, these two could be a good start:
- DMTF DSP1033 Profile Registration
Pegasus has also already implemented this, so as to avoid repeated work by different groups writing providers; you can start using it right away.
I was expecting DSP1033 could help me jump from 'root/interop' or 'interop' to OpenLMI's 'root/cimv2' and provide me the tree-root 'CIM_ComputerSystem' for the SNIA SMI-S 'Block Services' profile.
I have no experience with CIM providers or Pegasus, but from an SMI-S client's view, this is how I understand DSP1033.
Expected way: [1] (a pywbem sketch of this traversal follows below)
- EnumerateInstances('CIM_RegisteredProfile') in the 'interop' namespace. # As DSP1033 states, 'root/interop' is OK while 'interop' is preferred.
- Find a profile with RegisteredName == 'Block Services'.
- Associators(cim_chose_rp.path, AssocClass='CIM_ElementConformsToProfile', ResultRole='ManagedElement')
- Then we get a CIM_ComputerSystem, which is defined by SNIA SMI-S rev4.
OpenLMI current status:
- EnumerateInstances('CIM_RegisteredProfile') in the 'root/interop' namespace.
- Find a profile with RegisteredName == 'Block Services', which turns out to be the Pegasus default one; so we should use 'OpenLMI-Storage' instead.
- The previous Associators query returned NULL.
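For reference, the traversal I use looks roughly like this (pywbem; host, credentials and the 'OpenLMI-Storage' profile name are taken from my test setup):
====
# Sketch of the DSP1033 traversal described above; host, credentials
# and the chosen RegisteredName are from my test setup.
import pywbem

conn = pywbem.WBEMConnection('https://localhost:5989', ('pegasus', 'password'))

# Step 1: enumerate registered profiles in the interop namespace.
profiles = conn.EnumerateInstances('CIM_RegisteredProfile',
                                   namespace='root/interop')

# Step 2: pick the profile by RegisteredName ('OpenLMI-Storage' here,
# since the generic 'Block Services' entry belongs to Pegasus itself).
chosen = [p for p in profiles
          if p.get('RegisteredName') == 'OpenLMI-Storage']

# Step 3: follow CIM_ElementConformsToProfile to the scoping system --
# for Block Services this should end at a CIM_ComputerSystem.
for prof in chosen:
    for system in conn.Associators(prof.path,
                                   AssocClass='CIM_ElementConformsToProfile',
                                   ResultRole='ManagedElement'):
        print(system.classname, system.get('Name'))
====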
OpenLMI versions I am using:
====
openlmi-networking-0.2.0-1.elx.x86_64
openlmi-storage-0.6.0-2.elx.noarch
openlmi-indicationmanager-libs-0.3.0-3.elx.x86_64
openlmi-powermanagement-0.3.0-3.elx.x86_64
openlmi-python-providers-0.3.0-3.elx.noarch
openlmi-logicalfile-0.3.0-3.elx.x86_64
openlmi-service-0.3.0-3.elx.x86_64
openlmi-account-0.3.0-3.elx.x86_64
openlmi-python-base-0.3.0-3.elx.noarch
openlmi-providers-0.3.0-3.elx.x86_64
====
Please kindly correct me if I misunderstand any standard spec, which happens a lot.
Thanks for the reply and correction on open-pegasus interop.
Best regards.
[1] I verified that procedure on EMC VNX SMI-S provider and IBM VNX SMI-S provider.
On 10/28/2013 01:58 AM, Gris Ge wrote:
On Fri, Oct 25, 2013 at 05:05:46PM +0530, Devchandra L Meetei wrote:
If so, these two could be a good start:
- DMTF DSP1033 Profile Registration
Pegasus has also already implemented this, so as to avoid repeated work by different groups writing providers; you can start using it right away.
I was expecting DSP1033 could help me jump from 'root/interop' or 'interop' to OpenLMI's 'root/cimv2' and provide me the tree-root 'CIM_ComputerSystem' for the SNIA SMI-S 'Block Services' profile.
I have no experience with CIM providers or Pegasus, but from an SMI-S client's view, this is how I understand DSP1033.
Expected Way: [1]
- EnumerateInstances('CIM_RegisteredProfile') in the 'interop' namespace. # As DSP1033 states, 'root/interop' is OK while 'interop' is preferred.
- Find a profile with RegisteredName == 'Block Services'
This step won't work; we can't claim to implement the 'Block Services' profile, as we don't implement the Primordial Pool and maybe also other mandatory classes. I could add a fake one, which would be read-only, but isn't that against SMI-S?
Jan