
Re: CloudStack Ceph Storage experiments / help requested


I removed the zone-wide Ceph storage and created a cluster-wide one instead.
Now it works. I traced the problem to

https://github.com/apache/cloudstack/blob/68b4b8410138a1a16337ff15ac6260e0ecae9bc0/engine/storage/src/org/apache/cloudstack/storage/allocator/AbstractStoragePoolAllocator.java#L118

It looks like zone-wide Ceph is not supported.
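
To illustrate what the logs suggest is happening, here is a simplified sketch of an allocator suitability filter. This is NOT the actual CloudStack code; the class, record, and method names (AllocatorSketch, Pool, filterSuitable) are hypothetical. It only models the observed behavior: a pool can match the storage tag and pass the capacity check, yet still be dropped if the allocator rejects its scope.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified model of a storage-pool suitability filter.
// NOT the real CloudStack implementation -- just a sketch of how a
// zone-wide pool that matches the tag can still produce an empty result.
public class AllocatorSketch {
    enum Scope { ZONE, CLUSTER }

    record Pool(String name, Scope scope, String tag) {}

    // Keep only pools whose tag matches the offering and whose scope the
    // allocator supports (cluster-wide only, in this sketch).
    static List<Pool> filterSuitable(List<Pool> pools, String wantedTag) {
        List<Pool> suitable = new ArrayList<>();
        for (Pool p : pools) {
            if (!wantedTag.equals(p.tag())) continue;
            if (p.scope() != Scope.CLUSTER) continue; // zone-wide pool rejected here
            suitable.add(p);
        }
        return suitable;
    }

    public static void main(String[] args) {
        List<Pool> pools = List.of(
            new Pool("HA-CEPH-SSD-R5", Scope.ZONE, "rbd"),
            new Pool("LOCAL-NFS", Scope.CLUSTER, "nfs"));
        // The zone-wide RBD pool matches the tag but is filtered out by
        // scope, so the result is empty -- matching the log line
        // "List of pools in descending order of free capacity: []".
        System.out.println(filterSuitable(pools, "rbd")); // prints []
    }
}
```

Under this reading, recreating the same pool cluster-wide makes it pass the scope filter, which matches what I observed.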

On Wed, 26 Dec 2018 at 15:07, Ivan Kudryavtsev <kudryavtsev_ia@xxxxxxxxx> wrote:

> Hello, colleagues. Merry Christmas to you. I have been experimenting with
> CloudStack Ceph block storage and stumbled upon a problem with deploying a
> VM to Ceph RBD.
>
> ACS 4.11.3
>
> 1. Created a zone-wide RBD storage pool with the 'rbd' tag; it shows as UP in CloudStack.
> 2. Created a service offering with the 'rbd' storage tag.
>
> Upon deployment I see the following logs:
>
> 2018-12-27 02:58:55,691 DEBUG [o.a.c.s.a.ZoneWideStoragePoolAllocator]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) ZoneWideStoragePoolAllocator to find storage pool
> 2018-12-27 02:58:55,696 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) Checking if storage pool is suitable, name: null ,poolId:
> 37
> 2018-12-27 02:58:55,699 DEBUG [c.c.s.StorageManagerImpl]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) Destination pool id: 37
> 2018-12-27 02:58:55,711 DEBUG [c.c.s.StorageManagerImpl]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) Pool ID for the volume with ID 21022 is null
> 2018-12-27 02:58:55,716 DEBUG [c.c.s.StorageManagerImpl]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) Found storage pool HA-CEPH-SSD-R5 of type RBD with
> overprovisioning factor 1
> 2018-12-27 02:58:55,716 DEBUG [c.c.s.StorageManagerImpl]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) Total over provisioned capacity calculated is 1 *
> 1099511627776
> 2018-12-27 02:58:55,716 DEBUG [c.c.s.StorageManagerImpl]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) Total capacity of the pool HA-CEPH-SSD-R5 with ID 37 is
> 1099511627776
> 2018-12-27 02:58:55,717 DEBUG [c.c.s.StorageManagerImpl]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) Checking pool: 37 for storage allocation , maxSize :
> 1099511627776, totalAllocatedSize : 10737418240, askingSize : 64424509440,
> allocated disable threshold: 0.85
> 2018-12-27 02:58:55,718 DEBUG [o.a.c.s.a.AbstractStoragePoolAllocator]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) List of pools in descending order of free capacity: []
> 2018-12-27 02:58:55,718 DEBUG [c.c.d.DeploymentPlanningManagerImpl]
> (API-Job-Executor-102:ctx-fe2aab98 job-459769 ctx-bddac228)
> (logid:dd88c73e) No suitable pools found for volume:
> Vol[21022|vm=5641|ROOT] under cluster: 1
>
> So it looks like the storage pool is found (HA-CEPH-SSD-R5), and its size
> and utilization are determined correctly, but the next step yields an empty
> set as the result of the calculation.
>
> Any help is appreciated.
>
> --
> With best regards, Ivan Kudryavtsev
> Bitworks LLC
> Cell RU: +7-923-414-1515
> Cell USA: +1-201-257-1512
> WWW: http://bitworks.software/ <http://bw-sw.com/>
>
>

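For reference, plugging the numbers from the quoted logs into a capacity check shows that capacity cannot be the reason the pool list came back empty. The formula below is my assumption of the usual "allocated + asking vs. threshold * total" test, not necessarily CloudStack's exact code; all figures are taken verbatim from the logs.

```java
// Recomputing the capacity check from the logged values (a sketch; the
// exact CloudStack formula may differ).
public class CapacityCheck {
    public static void main(String[] args) {
        long totalCapacity = 1099511627776L;   // maxSize from the log (1 TiB)
        double overprovisioningFactor = 1.0;   // "overprovisioning factor 1"
        long totalAllocated = 10737418240L;    // totalAllocatedSize (10 GiB)
        long askingSize = 64424509440L;        // askingSize (60 GiB)
        double threshold = 0.85;               // allocated disable threshold

        double usable = totalCapacity * overprovisioningFactor * threshold;
        boolean fits = totalAllocated + askingSize <= usable;
        System.out.println("fits=" + fits); // prints fits=true
    }
}
```

Since roughly 70 GiB requested against ~934 GiB of usable capacity clearly fits, the empty list must come from a later suitability filter, which is consistent with the scope check at the AbstractStoragePoolAllocator line linked above.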
-- 
With best regards, Ivan Kudryavtsev
Bitworks LLC
Cell RU: +7-923-414-1515
Cell USA: +1-201-257-1512
WWW: http://bitworks.software/ <http://bw-sw.com/>