Functional Overview

Workloads and Use-Cases

ZebClient is built from the ground up to give modern applications high-speed access to data and persistent storage for all data types, at any scale. ZebClient offers seamless scalability, with both scale-up and scale-out performance that grows with the number of CPUs, and it uses S3 and Azure Blob Storage as backend storage. Deploying ZebClient in the cloud greatly improves the cost-performance ratio for both cloud computing and storage. When deployed for on-premises computing, ZebClient significantly reduces storage costs by replacing local storage with cloud object storage.

Functional Overview

ZebClient is designed as an ultra-fast distributed parallel file system, purpose-built to deliver high performance at any scale of data. It is backed by durable, low-cost cloud-based object storage and runs efficiently on cost-effective hardware.

ZebClient implements a next-generation distributed parallel file system by separating the concerns of compute, data, and metadata storage. When ZebClient stores data, the data itself is persisted in object storage (e.g. Azure Blob Storage or Amazon S3). The corresponding metadata is stored in databases such as Postgres and Redis and is automatically backed up to object storage.
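
This split can be sketched with a toy model: data chunks go to an "object store" while the file layout goes to a "metadata store". The chunk size, key naming, and in-memory dictionaries below are purely illustrative assumptions, not ZebClient's actual API or on-disk format.

```python
# Illustrative model of a data/metadata split (NOT ZebClient's real API).
# object_store stands in for S3 / Azure Blob Storage;
# metadata_store stands in for Postgres / Redis.

CHUNK_SIZE = 4  # tiny chunk size, for demonstration only

object_store = {}
metadata_store = {}

def write_file(path: str, data: bytes) -> None:
    """Split data into chunks, persist each chunk, record the layout."""
    keys = []
    for i in range(0, len(data), CHUNK_SIZE):
        key = f"{path}#{i // CHUNK_SIZE}"
        object_store[key] = data[i:i + CHUNK_SIZE]
        keys.append(key)
    metadata_store[path] = {"size": len(data), "chunks": keys}

def read_file(path: str) -> bytes:
    """Look up the layout in metadata, then fetch chunks from object storage."""
    meta = metadata_store[path]
    return b"".join(object_store[k] for k in meta["chunks"])

write_file("/demo/hello.txt", b"hello, zebclient")
assert read_file("/demo/hello.txt") == b"hello, zebclient"
```

Because the metadata store alone describes where every chunk lives, the metadata can be backed up independently of the (much larger) data, which is what makes the automatic metadata backup to object storage practical.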

ZebClient seamlessly integrates with advanced analytics, machine learning, artificial intelligence, and other demanding applications to provide massive, elastic, cost-effective, and high-performance data storage without the need for code modifications. With ZebClient, you can leave behind worries about availability, disaster recovery, monitoring, and scaling, thereby reducing operational and maintenance tasks to a minimum.

Highlights

  • High Performance - ZebClient provides extremely fast read and write access to data.

  • POSIX-Compliant - ZebClient can be used like a local file system as it seamlessly interfaces with existing applications.

  • Global Namespace - ZebClient allows external data of any size to be viewed and accessed through the file system, whether or not the data has been transferred into it.

  • Cluster Portability - A ZebClient cluster is highly flexible: after a graceful shutdown, it can be swiftly ported to any location or hardware of choice without moving data.

  • ZebClient Unified Data Access Point - A single centralized store for both structured and unstructured data, simplifying the use and retrieval of any data.

  • Redundancy - Using the Zebware proprietary erasure code ZebEC, ZebClient protects data from node, drive, memory, and bit-rot failures.

  • Availability - ZebClient delivers high-availability access to data under its management.

  • Encryption - In transit, data is encrypted using TLS. For data at rest, you can enable Server-Side Encryption (SSE) provided by your selected CSP. For complete private data encryption, please contact the Zebware support team.

  • Kubernetes - Native deployment of ZebClient in Kubernetes is performed via the ZebClient Helm CSI chart.

  • Distributed - Each file system can be mounted on multiple VMs simultaneously, allowing for high-performance concurrent read and write operations and shared data access.

  • Strong Consistency - Any committed changes to files become immediately visible across all servers, consistent with ACID principles.

  • File Locking - ZebClient supports BSD lock (flock) and POSIX lock (fcntl).

  • Data Tiering - ZebClient uses a configurable data tiering mechanism that automatically places frequently used data into the most suitable tier to ensure high-performance operations.

  • Close-to-Open Cache Consistency - This ZebClient function prevents two or more nodes from simultaneously modifying a file, thereby avoiding cache write conflicts.

  • Eviction - ZebClient provides a robust and intelligent eviction mechanism, ensuring that data is moved to the most cost-efficient storage without compromising performance.

  • License - Both online and offline licenses are available.
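
The locking bullet above refers to standard POSIX facilities, so applications use them on a ZebClient mount exactly as on a local file system. The sketch below demonstrates BSD `flock` semantics with Python's `fcntl` module on an ordinary temporary file (a stand-in for a ZebClient mount point, which is an assumption here): a second open file description cannot take an exclusive lock while the first holds one.

```python
# BSD (flock) and POSIX (fcntl/lockf) advisory locks on a POSIX file system.
# The temporary file below stands in for a file on a ZebClient mount.
import fcntl
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "lock.demo")

with open(path, "w") as f1:
    # BSD-style lock: tied to the open file description, so a second
    # open() of the same file gets an independent (and blocked) lock slot.
    fcntl.flock(f1, fcntl.LOCK_EX)              # exclusive lock
    with open(path, "w") as f2:
        try:
            fcntl.flock(f2, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking
            conflict = False
        except BlockingIOError:
            conflict = True                     # lock is already held by f1
    fcntl.flock(f1, fcntl.LOCK_UN)              # release the BSD lock

with open(path, "w") as f3:
    # POSIX record lock (fcntl/lockf): locks a byte range, here the whole file.
    fcntl.lockf(f3, fcntl.LOCK_EX)
    fcntl.lockf(f3, fcntl.LOCK_UN)

print("flock conflict detected:", conflict)
```

Note the semantic difference between the two lock families: `flock` locks belong to the open file description, while POSIX `fcntl` record locks are per-process and byte-range granular, which is why the conflict demonstration above uses `flock`.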
