1300 046 668 info@innotta.com.au


True Scale-Out Block and Object Storage Solution Powered by Ceph Storage

Robust, proven, open and cloud-scale

Innostore is a unified, distributed, massively scalable, self-healing, software-defined storage solution designed for excellent performance, reliability, and availability. It leverages commodity hardware, reducing cost and giving you flexibility. Innostore uses the same design principles and technologies that power IT innovators such as Google®, Facebook®, and Amazon Web Services®, tailored to the storage needs of small to medium enterprises. The modular, scale-out building-block design allows your organization to start with a small deployment and grow incrementally into a very large storage cluster installation. Innostore’s use cases range from high-performance Tier 1 storage for business-critical applications to nearline storage for data backup and archival purposes.


Enterprises today struggle to manage the explosive growth of data while remaining agile and cost-effective. To manage petabytes of data at the speed required by today’s business, enterprises must use cloud technology to store their data. Designed for the cloud, Ceph Storage significantly lowers the cost of storing enterprise data and helps manage exponential data growth — efficiently, automatically, and economically.


Ceph’s CRUSH algorithm liberates storage clusters from the scalability and performance limitations imposed by centralized data tables. It replicates and rebalances data within the cluster dynamically, eliminating this tedious task for administrators while delivering high performance and virtually unlimited scalability.
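To make the idea of calculated placement concrete, here is a minimal Python sketch of hash-based placement. It uses a simplified rendezvous-hashing scheme rather than Ceph's actual CRUSH implementation, and the function name and OSD labels are illustrative only. The key property it shares with CRUSH is that every client running the same function over the same cluster membership computes the same answer, so no central lookup table is needed:

```python
import hashlib

def place_object(object_name: str, osds: list[str], replicas: int = 3) -> list[str]:
    """Toy stand-in for CRUSH-style calculated placement: rank the OSDs
    by a hash of (object name, OSD name) and pick the top `replicas`.
    Any client computing this gets the same answer -- no lookup table."""
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{object_name}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]

osds = [f"osd.{i}" for i in range(8)]
primary_set = place_object("vm-disk-0001", osds)
print(primary_set)  # same three OSDs on every client, every time
```

Because placement is a pure function of the object name and the set of OSDs, adding or removing a node perturbs only a fraction of placements instead of forcing a global remap.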




Scalability

  • Scale-out architecture. Grow a cluster from one node to thousands.
  • Automatic rebalancing. Use a peer-to-peer architecture to add capacity at any time with minimal operational effort. Say goodbye to forklift upgrades and data migration projects.
  • Hot or phased software upgrades. Upgrade clusters in phases with no or minimal downtime.


Reliability and availability

  • Access control lists. Exert granular control over object storage user- and bucket-level permissions.
  • Quotas. Prevent abuse with pool or object user storage limits.
  • Dynamic block resizing. Expand or shrink Ceph block devices with no or minimal downtime.
  • Striping, erasure coding, or replication across nodes. Enjoy data durability, high availability, and high performance.
  • Storage policies. Configure placement to reflect SLAs, performance requirements, and failure domains.
  • Data placement. Use the CRUSH algorithm so that every client can calculate where data is located without lookup tables, speeding performance.
  • Automatic failover. Prevent server or disk failures from impacting data integrity, availability, or performance.
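The durability trade-off between replication and erasure coding can be sketched with a toy example. The code below uses simple XOR parity (RAID-5 style) as a stand-in; Ceph's actual erasure-code plugins use more general codes such as Reed-Solomon, and the function names here are illustrative:

```python
def xor_parity(chunks: list[bytes]) -> bytes:
    """Compute a single parity chunk as the byte-wise XOR of equal-size chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild one lost chunk from the surviving chunks plus the parity chunk."""
    return xor_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # 3 data chunks
parity = xor_parity(data)            # 1 parity chunk: ~33% storage overhead
lost = data.pop(1)                   # simulate losing one chunk (disk failure)
assert recover(data, parity) == lost # rebuilt without keeping full replicas
```

For comparison, 3-way replication costs 200% storage overhead; a 3-data-plus-1-parity layout costs about 33%, at the price of reconstruction work when a chunk is lost.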


Performance

  • Copy-on-write cloning. Provision virtual machine (VM) images quickly (block only).
  • In-memory client-side caching. Cache data in both kernel and hypervisor clients (block only).
  • Improved parallelism for data I/O. Leverage a client-cluster model instead of a client-server one.
  • Cache tiering. Promote hot data to SSDs with expiration policies.
  • Flash journals. Enhance write performance.
  • Customizable stripe sizes.


Multi-datacenter support

  • Zones and region support. Deploy topologies similar to Amazon Web Services S3, and others, with a global namespace (object only).
  • Read affinity. Serve local copies of data to local users (object only).
  • Datacenter synchronization. Back up full or partial sets of data between regions (object only).
  • Export snapshots to geographically dispersed datacenters. Institute disaster recovery (block only).
  • Export incremental snapshots. Minimize network bandwidth (block only).
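The bandwidth saving from incremental snapshot export can be illustrated with a short sketch. This is not the on-disk format of RBD's `rbd export-diff`; it is a hypothetical block-level diff over two equal-sized snapshot images, showing why only changed blocks need to cross the wire:

```python
BLOCK = 4  # toy block size; real deployments use much larger blocks

def block_diff(old: bytes, new: bytes) -> dict[int, bytes]:
    """Return only the blocks that changed between two equal-size snapshots."""
    delta = {}
    for i in range(0, len(new), BLOCK):
        if old[i:i + BLOCK] != new[i:i + BLOCK]:
            delta[i] = new[i:i + BLOCK]
    return delta

def apply_diff(old: bytes, delta: dict[int, bytes]) -> bytes:
    """Reconstruct the new snapshot from the old one plus the delta."""
    out = bytearray(old)
    for offset, chunk in delta.items():
        out[offset:offset + len(chunk)] = chunk
    return bytes(out)

snap1 = b"aaaabbbbccccdddd"
snap2 = b"aaaaXXXXccccdddd"            # only one block changed
delta = block_diff(snap1, snap2)
assert apply_diff(snap1, delta) == snap2
assert len(delta) == 1                 # only the changed block is shipped
```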


Cost-effectiveness

  • Thin provisioning. Allow for over-provisioning (block only).
  • Commodity hardware. Tailor the price/performance mix to the workload.
  • Heterogeneous hardware. Avoid having to replace older nodes as newer ones are added.
  • Erasure coding. Enjoy the value of a cost-effective data durability option.
  • Open source. Ceph software is completely open source (LGPL 2.1), so there is no cost to acquire it.
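Thin provisioning as a concept can be demonstrated locally with a sparse file, which is how many hypervisors and file systems implement it; Ceph's RBD behaves analogously, allocating backing objects only when blocks are first written. A minimal sketch, assuming a POSIX system whose file system supports sparse files:

```python
import os
import tempfile

# Create a "volume" with a 10 MiB logical size but no data blocks written yet.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.truncate(10 * 1024 * 1024)  # extends logical size without allocating blocks

logical = os.path.getsize(path)            # what the user sees: 10 MiB
physical = os.stat(path).st_blocks * 512   # what the disk actually allocated
print(f"logical={logical} bytes, physical={physical} bytes")
os.remove(path)
```

The gap between logical and physical size is the over-provisioning headroom: capacity is promised up front but consumed only as data is written.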


Protocol support

  • Ceph Storage natively supports the RBD protocol for block storage and the OpenStack® Swift and Amazon Web Services® S3 APIs for object storage.
  • Innostore extends these capabilities to include iSCSI, NFS, and CIFS, all designed and configured to be highly available like the rest of the storage solution.


Implementation and support

  • Through its Innostore offering, INNOTTA will help you get up and running with Ceph Storage quickly and cost-effectively.
  • After the implementation is complete, INNOTTA will provide ongoing long-term support for your Ceph cluster(s).

Architecture Overview

The Ceph Storage system is founded on the Ceph Storage Cluster, also known as RADOS (short for Reliable Autonomic Distributed Object Store), a massively scalable and flexible object store with tightly integrated applications for a variety of storage needs. By using the powerful CRUSH algorithm (short for Controlled Replication Under Scalable Hashing) to optimize data placement, the Ceph Storage Cluster is self-managing and self-healing. A RESTful interface is provided via the Ceph Object Gateway (RGW), and virtual disks are provisioned through the Ceph Block Device (RBD).

Ceph Storage

Ceph is an open-source, distributed, petabyte-scale storage stack offering object storage and block storage. It offers massive scalability, configurable synchronous replication, and n-way redundancy. We offer on-site and remote consultancy services around Ceph and use it as the storage engine of choice for Innostore. All of Ceph is 100% open source and licensed under LGPL 2.1.

The Ceph logo is a trademark of Ceph Storage, Inc. INNOTTA is not affiliated with Ceph Storage, Inc.

Frequently Asked Questions

What's Innostore?

Innostore is a true scale-out block and object storage solution powered by Ceph Storage. It provides a unified, distributed, massively scalable, self-healing, software-defined storage solution designed for excellent performance, reliability, and availability. Its use cases range from high-performance Tier 1 storage for business-critical applications to nearline storage for data backup and archival purposes.

What software is Innostore based on?

Innostore is based on open source Ceph storage and storage protocols embedded within the Linux kernel. In addition to these open source components, we have developed our own performance monitoring, alerting and administration tools which we provide to our clients who have subscribed to Innostore support.

What storage protocols does Innostore support?

• For block storage, Innostore supports the native RBD protocol and iSCSI.
• For object storage, Innostore supports S3 and Swift.

What's included in Innostore support?

• We test new Ceph releases and their integration with the rest of the software stack in our lab environment and advise our clients on recommended upgrade paths and steps.
• We are part of the Ceph community, which gives us a wealth of information about advances and deployments around the world that we pass on to our clients.
• We develop and maintain easy-to-use admin guides and cheat sheets for our clients.
• We gather performance and health metrics from our clients’ Ceph clusters and send them to a cloud-based data warehouse we maintain, to facilitate alerting, problem resolution, trending, and capacity planning.
• We proactively notify our clients of potential availability, performance, or capacity issues and work with them to avoid these issues.
• We assist our clients in upgrading their Innostore software.
• We assist our clients in replacing faulty storage nodes and drives.
• We assist our clients in expanding their clusters.

During what times is Innostore support available?

Innostore support is available from 9 AM to 6 PM AEST, Monday to Friday.

How much does Innostore support cost?

The monthly fee for Innostore support depends on the number of OSD nodes in your Ceph cluster. Please contact sales for a no-obligation quote.

Request Quotation
