1300 046 668 info@innotta.com.au
Powered by Ceph Storage

INNOTTA.STORE

True Scale-Out Block and Object Storage Solution Powered by Ceph Storage

Robust, proven, open and cloud-scale


INNOTTA.STORE is a unified, distributed, massively scalable, self-healing, software-defined storage solution designed for excellent performance, reliability and availability. It leverages commodity hardware, saving you cost and giving you flexibility. INNOTTA.STORE uses the same design principles and technologies that power IT innovators such as Google®, Facebook®, and Amazon Web Services®, and tailors these to address the storage needs of small to medium enterprises. The modular, scale-out building-block design allows your organization to start with a small deployment and grow incrementally into a very large storage cluster. INNOTTA.STORE’s use cases range from high-performance Tier 1 storage for business-critical applications to nearline storage for backup and archival.


Enterprises today struggle to manage the explosive growth of data while remaining agile and cost-effective. To manage petabytes of data at the speed required by today’s business, enterprises must use cloud technology to store their data. Designed for the cloud, Ceph Storage significantly lowers the cost of storing enterprise data and helps manage exponential data growth — efficiently, automatically, and economically.


Ceph’s CRUSH algorithm liberates storage clusters from the scalability and performance limitations imposed by centralized data-table mapping. It replicates and rebalances data within the cluster dynamically, eliminating this tedious task for administrators while delivering high performance and virtually unlimited scalability.
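The core idea can be illustrated with a minimal sketch. The code below uses rendezvous (highest-random-weight) hashing, a deliberately simplified stand-in for CRUSH, which additionally models device weights, bucket hierarchies, and failure domains; the function and names here are hypothetical, for illustration only:

```python
import hashlib

def place(object_name: str, nodes: list[str], replicas: int = 3) -> list[str]:
    """Every client independently ranks nodes by a hash of
    (node, object) and takes the top scorers as the replica set.
    No central lookup table is consulted."""
    scored = sorted(
        nodes,
        key=lambda node: hashlib.sha256(f"{node}/{object_name}".encode()).hexdigest(),
        reverse=True,
    )
    return scored[:replicas]

nodes = [f"osd.{i}" for i in range(10)]
replica_set = place("volume-42/block-007", nodes)
# Any client running the same computation gets the identical answer.
assert replica_set == place("volume-42/block-007", nodes)
```

Because placement is a pure function of the object name and the cluster map, every client can locate data directly, which is what removes the centralized-table bottleneck.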

FEATURES


EXABYTE SCALABILITY

  • Scale-out architecture. Grow a cluster from one node to thousands.
  • Automatic rebalancing. Use a peer-to-peer architecture to add capacity at any time with minimal operational effort. Say goodbye to forklift upgrades and data migration projects.
  • Hot or phased software upgrades. Upgrade clusters in phases with no or minimal downtime.
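The "automatic rebalancing" claim can be made concrete with a small sketch. Again using rendezvous hashing as a simplified stand-in for CRUSH (the names below are hypothetical), adding one node relocates only roughly 1/N of the objects, and every relocated object lands on the new node — nothing shuffles among the old nodes:

```python
import hashlib

def owner(obj: str, nodes: list[str]) -> str:
    # Highest-random-weight hashing: the owner is the node with the
    # top hash score for this object.
    return max(nodes, key=lambda n: hashlib.sha256(f"{n}/{obj}".encode()).digest())

objects = [f"obj-{i}" for i in range(1000)]
before = {o: owner(o, [f"osd.{i}" for i in range(9)]) for o in objects}
after = {o: owner(o, [f"osd.{i}" for i in range(10)]) for o in objects}
moved = sum(before[o] != after[o] for o in objects)
# Only about a tenth of the objects relocate, all of them to osd.9.
```

This minimal-movement property is what makes capacity expansion a low-impact, incremental operation rather than a forklift migration.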

SECURITY

  • Access control lists. Exert granular control over object storage user and bucket-level permissions.
  • Quotas. Prevent abuse with pool-level or per-user storage limits.
  • Dynamic block resizing. Expand or shrink Ceph block devices with no or minimal downtime.
  • Striping, erasure coding, or replication across nodes. Enjoy data durability, high availability, and high performance.
  • Storage policies. Configure placement to reflect SLAs, performance requirements, and failure domains.
  • Data placement. Use the CRUSH algorithm so that every client can calculate where data is located without lookup tables, speeding performance.
  • Automatic failover. Prevent server or disk failures from impacting data integrity, availability, or performance.

PERFORMANCE

  • Copy-on-write cloning. Provision virtual machine (VM) images quickly (block only).
  • In-memory client-side caching. Cache in both the kernel and hypervisor clients (block only).
  • Improved parallelism for data I/O. Leverage a client-cluster model instead of a client-server one.
  • Cache tiering. Promote hot data to SSDs with expiration policies.
  • Flash journals. Enhance write performance by journaling writes to flash.
  • Customizable stripe sizes.
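Striping is what lets a single volume's sequential I/O be served by many OSDs in parallel. A minimal sketch of the idea, with hypothetical names and deliberately tiny sizes (real stripe units are typically megabytes):

```python
def stripe(data: bytes, stripe_unit: int, stripe_count: int) -> list[bytes]:
    """Distribute fixed-size stripe units round-robin across
    `stripe_count` stripes, so consecutive units land on
    different storage targets."""
    stripes = [bytearray() for _ in range(stripe_count)]
    for i in range(0, len(data), stripe_unit):
        stripes[(i // stripe_unit) % stripe_count].extend(data[i:i + stripe_unit])
    return [bytes(s) for s in stripes]

payload = bytes(range(16))
parts = stripe(payload, stripe_unit=4, stripe_count=2)
# Stripe 0 holds units 0 and 2; stripe 1 holds units 1 and 3.
```

Tuning the stripe unit and count lets an administrator trade per-object parallelism against metadata overhead for a given workload.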

MULTI-DATACENTER SUPPORT AND DISASTER RECOVERY

  • Zones and region support. Deploy topologies similar to Amazon Web Services S3 and others, with a global namespace (object only).
  • Read affinity. Serve local copies of data to local users (object only).
  • Datacenter synchronization. Back up full or partial sets of data between regions (object only).
  • Export snapshots to geographically dispersed datacenters. Institute disaster recovery (block only).
  • Export incremental snapshots. Minimize network bandwidth (block only).
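The bandwidth saving from incremental snapshot export comes from shipping only the blocks that changed between two snapshots. A minimal sketch of that idea (hypothetical functions, not Ceph's actual export format):

```python
def diff_blocks(old: bytes, new: bytes, block_size: int = 4) -> dict[int, bytes]:
    """Return only the blocks that differ between two snapshots,
    keyed by byte offset -- the essence of an incremental export."""
    delta = {}
    for off in range(0, max(len(old), len(new)), block_size):
        if old[off:off + block_size] != new[off:off + block_size]:
            delta[off] = new[off:off + block_size]
    return delta

def apply_diff(old: bytes, delta: dict[int, bytes], block_size: int = 4) -> bytes:
    img = bytearray(old)
    for off, blk in delta.items():
        img[off:off + block_size] = blk
    return bytes(img)

snap1 = b"AAAABBBBCCCC"
snap2 = b"AAAAXXXXCCCC"
delta = diff_blocks(snap1, snap2)        # only the block at offset 4 changed
assert apply_diff(snap1, delta) == snap2
```

Replaying small deltas at the remote site keeps the disaster-recovery copy current without retransmitting whole images over the WAN.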

COST-EFFECTIVENESS

  • Thin provisioning. Allow for over-provisioning (block only).
  • Commodity hardware. Tailor the price/performance mix to the workload.
  • Heterogeneous hardware. Avoid having to replace older nodes as newer ones are added.
  • Erasure coding. Enjoy the value of a cost-effective data durability option.
  • Ceph software is completely open source (LGPL 2.1), so there is no license cost to acquire it.
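The cost advantage of erasure coding comes from tolerating failures with parity chunks instead of full copies. The sketch below shows the simplest possible code, single XOR parity (k=2 data chunks, m=1 parity); Ceph's actual erasure-code plugins support configurable k+m with Reed-Solomon-style codes, and the functions here are purely illustrative:

```python
def encode(chunk_a: bytes, chunk_b: bytes) -> bytes:
    """Parity chunk: byte-wise XOR of the two data chunks."""
    return bytes(a ^ b for a, b in zip(chunk_a, chunk_b))

def recover(surviving: bytes, parity: bytes) -> bytes:
    """Rebuild the lost data chunk from the survivor plus parity."""
    return bytes(s ^ p for s, p in zip(surviving, parity))

a, b = b"hello wo", b"rld!!!!!"
parity = encode(a, b)
# Storage cost: 3 chunks for 2 chunks of data (1.5x),
# versus 2x or 3x for full replication.
assert recover(b, parity) == a   # chunk a lost, rebuilt from b + parity
```

Losing either data chunk is survivable at 1.5x raw capacity, which is the durability-per-dollar win the bullet above refers to.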

VARIETY OF STORAGE PROTOCOLS SUPPORTED

  • Ceph Storage natively supports the RBD protocol for block storage, and the OpenStack® Swift and Amazon Web Services® S3 APIs for object storage.
  • INNOTTA.STORE extends these capabilities with iSCSI, NFS and CIFS – all designed and configured to be highly available, like the rest of the storage solution.

PRE- AND POST-IMPLEMENTATION SUPPORT

  • Through its INNOTTA.STORE offering, INNOTTA will help you quickly and cost-effectively get up and running with Ceph Storage.
  • After the implementation is complete, INNOTTA will provide ongoing long-term support for your Ceph cluster(s).

Architecture Overview

The Ceph Storage system is founded on the Ceph Storage Cluster, also known as RADOS (Reliable, Autonomic, Distributed Object Store): a massively scalable and flexible object store with tightly integrated applications for a variety of storage needs. By using the powerful CRUSH algorithm (Controlled Replication Under Scalable Hashing) to optimize data placement, the Ceph Storage Cluster is self-managing and self-healing. A RESTful interface is provided via the Ceph Object Gateway (RGW), and virtual disks are provisioned through the Ceph Block Device (RBD).
Ceph Storage
Ceph is an open-source distributed petascale storage stack offering object storage and block storage. It offers massive scalability, configurable synchronous replication, and n-way redundancy. We offer on-site and remote consultancy services around Ceph and use it as the storage engine of choice for INNOTTA.STORE. All of Ceph is 100% open source; everything is LGPL 2.1 licensed.

The Ceph logo is a trademark of Ceph Storage, Inc. INNOTTA is not affiliated with Ceph Storage, Inc.

Request Quotation

