Recently, several clients asked me to help them review the current options for enterprise Network Attached Storage (NAS) media production platforms. During that process I spent a fair amount of time with senior management and product management/development teams discussing the philosophy and technology of network attached storage, and I realized that not everyone understands exactly what Network Attached Storage entails.
With the global conditions of the past year, there has been an ever-increasing push to move everything to “cloud” resources. However, when it comes to editing full-resolution material (UHD, 8K, RAW, etc.), cloud options are few and far between, and expensive. There is still a huge need for cost-effective, on-premises, high-performance media editing storage.
Network Attached Storage (NAS) is just what its name suggests: storage that is shared and accessed over an Ethernet-based local area network (LAN). A NAS is fundamentally a server with a large amount of storage incorporated into its design; its CPU presents the storage across the network and manages user access. The more users and storage, the more CPU and network resources are typically required.
How NAS Came to Be
Before NAS became widespread, most shared storage took the form of the “Storage Area Network,” or SAN. A SAN was presented to individual computers as direct-attached storage via a dedicated private network utilizing Fibre Channel. SAN solutions were expensive and complicated to manage, and were mostly reserved for larger organizations with dedicated technology staff. The benefit to the media industry was that they could provide upwards of 8 Gbps of bandwidth to the storage, orders of magnitude faster than the standard 1 Gbps networks of the day. Today, with 10 Gbps local area networks fairly standard and 25/50 Gbps systems becoming more ubiquitous, Fibre Channel solutions have declined in popularity.
The first Network Attached Storage solutions were developed in the early 1980s by familiar names such as IBM, Sun Microsystems, 3Com, Novell and Microsoft. By the mid-2000s, NAS vendors had begun developing offerings aimed at small businesses, as well as specialized solutions for the media industry. As of 2021, multiple vendors provide network attached storage focused on the media industry, with support for multiple simultaneous UHD and 8K workflows. For example, DigitalGlue’s creative.space storage platform offers several solutions that can support anywhere from a handful of simultaneous users to over 100 concurrent edit stations producing UHD and 8K content.
Basic Storage and Networking Definitions for Beginners
Now that we have the history out of the way, let’s define a few terms for those who may not be as familiar with storage and networking. We can start simple with a bit, which is the smallest and most basic unit of digital information. Typically, you will see things measured in megabits (one million bits) per second, or Mbps (e.g. video bit rate), or gigabits (one billion bits) per second, or Gbps (e.g. network transport rate).
The next important term is the byte, a unit of digital information that consists of eight (8) bits. I know this may be very basic to those of us who work in the digital world on a daily basis, but it’s an important concept to understand when comparing bandwidth metrics, and even we “professionals” can get them confused.
Bandwidth Measurements
There are two (2) types of “bandwidth” measurements that matter for NAS solutions, and each is typically presented in a different metric. It is this difference that leads many people to misjudge the “speed” of a system.
The first is aggregate disk bandwidth (throughput): how much data the disks can read or write over a period of time. This is typically measured in megabytes per second (MB/s) or gigabytes per second (GB/s). This value depends on the type of disk (SATA, SAS, spinning HDD, SSD), how many of them are being used, and the type of RAID configuration. The default RAID implementation in the media world is RAID 6, with two parity disks; this allows up to two drives in a RAID group to fail before data is lost.
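To build some intuition for how drive count and RAID layout shape throughput, here is a minimal Python sketch. The per-drive figure and the linear-scaling assumption are illustrative ballparks of my own, not vendor specifications:

```python
def aggregate_read_bw_mbs(num_data_drives: int, per_drive_mbs: float) -> float:
    """Very rough aggregate sequential-read estimate for a RAID group.

    Assumes throughput scales with the number of data drives
    (parity drives excluded). Controller overhead, the network,
    and random I/O will reduce this substantially in practice.
    """
    return num_data_drives * per_drive_mbs

# Example: a RAID 6 group of 12 drives has 10 data drives.
# ~180 MB/s sequential throughput is a ballpark for a 7200 rpm HDD.
print(aggregate_read_bw_mbs(12 - 2, 180.0))  # -> 1800.0 MB/s (~1.8 GB/s)
```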
The second important bandwidth measurement is that of the network interface. The standard interface on computers today is one (1) gigabit per second (Gbps). Over the past few years, 10 Gbps interfaces have become more common at the desktop as well, and core switches now come equipped with 25/50 Gbps and even 100 Gbps as standard offerings. In most cases, a link-aggregated pair of 10 Gbps network interfaces or a single 40 Gbps NIC is more than enough to support a church’s needs, and these are relatively standard interfaces on most core network switch infrastructures.
Putting it All Together
Now we have to equate the two measurements to make sense of everything. As you might remember from computer class, there are eight (8) bits in a byte. So a one (1) Gbps Ethernet connection can consume at most 0.125 GB/s, or 125 MB/s, of disk bandwidth, and a ten (10) Gbps Ethernet connection can consume at most 1.25 GB/s of disk bandwidth.
The following chart provides an equivalency between bits and bytes when measuring network versus disk bandwidth:
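If the chart isn’t handy, the conversion is easy to reproduce yourself: divide the bit rate by eight. A minimal Python sketch, with link speeds chosen to match the common interfaces mentioned above:

```python
# Network link speeds are quoted in bits; disk bandwidth in bytes.
# With 8 bits per byte: MB/s = Gbps * 1000 / 8 = Gbps * 125.
for gbps in (1, 10, 25, 40, 50, 100):
    mbs = gbps * 125  # decimal megabytes per second
    print(f"{gbps:>3} Gbps network = {mbs:>6} MB/s ({mbs / 1000:.3f} GB/s) of disk bandwidth")
```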
Interpreting the Data
So what does all this mean? It means that if your NAS has 1 GB/s of disk bandwidth, you could support up to eight (8) simultaneous edit stations with 1 Gbps Ethernet connections (8 stations x 0.125 GB/s = 1 GB/s). However, if your edit stations have 10 Gbps connections, your NAS would only support a single edit station at a time if that station were using its entire network bandwidth (e.g. editing more than one stream of RED 8K R3D 12:1).
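The same back-of-the-envelope math as a small Python helper; this is a sketch, and real-world numbers will vary with protocol overhead and caching:

```python
def stations_at_full_link_rate(disk_bw_gbs: float, link_gbps: float) -> int:
    """How many edit stations can run at full link speed before the
    NAS's aggregate disk throughput (in GB/s) is exhausted."""
    per_station_gbs = link_gbps / 8.0  # convert bits to bytes
    return int(disk_bw_gbs // per_station_gbs)

print(stations_at_full_link_rate(1.0, 1))   # -> 8 stations on 1 GbE
print(stations_at_full_link_rate(1.0, 10))  # -> 0: a single 10 GbE station
                                            # already outruns 1 GB/s of disk
```

A result of zero in the second case simply means that one 10 Gbps station will be limited by the disks rather than the network, as described above.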
One of the fundamental drivers in selecting a media-centric NAS is how many simultaneous streams of video it can handle for a given codec. In other words, how many editors can be attached to the storage and still get their jobs done? The following chart illustrates the number of streams supported on each connection type for a given codec, along with the theoretical number of streams based on disk throughput:
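If the chart is not to hand, the calculation itself is straightforward. In the Python sketch below, every per-stream data rate is an illustrative placeholder of my own, not a published figure; substitute your codec’s actual data rate for your resolution and frame rate before sizing a system:

```python
# Illustrative per-stream data rates in MB/s -- placeholders only.
STREAM_RATES_MBS = {
    "ProRes 422 HQ (UHD)": 88,   # approx. at 29.97 fps
    "ProRes 4444 (UHD)": 132,    # approx. at 29.97 fps
    "RED 8K R3D 12:1": 300,      # rough placeholder
}
LINKS_MBS = {"1 GbE": 125, "10 GbE": 1250, "25 GbE": 3125}
DISK_BW_MBS = 2000  # example NAS with 2 GB/s aggregate disk throughput

for codec, rate in STREAM_RATES_MBS.items():
    by_link = {link: bw // rate for link, bw in LINKS_MBS.items()}
    print(f"{codec}: per-link streams {by_link}, "
          f"disk-limited max {DISK_BW_MBS // rate}")
```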
NAS Options Available
In the market today there are two basic types of NAS solutions: one based on traditional hardware RAID and the other based on a software-defined model utilizing OpenZFS. Hardware-based storage systems have been the primary solutions for the past decade or more. They utilize a RAID controller, a card or chip that sits between the operating system and the storage drives. The RAID controller virtualizes the drives into a distinct group (volume) with specific protection and redundancy, and presents it to the operating system. The operating system then manages the file system on the volume.
In a software-defined storage solution utilizing OpenZFS, the file system and volume management are provided by ZFS itself. ZFS is not an operating system; it runs on top of one, such as Linux (and has now been ported to many other operating systems). It was designed as a next-generation file system: it pools storage, repairs data automatically, and provides functionality similar to hardware RAID, which ZFS refers to as RAID-Z. This should not be confused with “software” RAID, which uses software to mimic a hardware RAID model for volume management. RAID-Z can provide one or two sets of parity data, like RAID 5/6, but also offers a third set of parity data in RAID-Z3.
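To make the parity trade-off concrete, here is a minimal sketch of approximate usable capacity at each RAID-Z level, assuming equal-size drives; actual ZFS space accounting is more involved:

```python
def raidz_usable_tb(num_drives: int, drive_tb: float, parity: int) -> float:
    """Approximate usable capacity of a single RAID-Z vdev.

    parity: 1 (RAID-Z1), 2 (RAID-Z2), or 3 (RAID-Z3). ZFS reserves
    the equivalent of `parity` drives for parity data; real pools
    lose a bit more to metadata and allocation padding.
    """
    if parity not in (1, 2, 3):
        raise ValueError("RAID-Z supports 1 to 3 parity drives")
    if num_drives < parity + 2:
        raise ValueError("not enough drives for this parity level")
    return (num_drives - parity) * drive_tb

# A 12-drive vdev of 16 TB disks (192 TB raw) at each parity level:
for p in (1, 2, 3):
    print(f"RAID-Z{p}: ~{raidz_usable_tb(12, 16.0, p):.0f} TB usable")
```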
Final Thoughts
Network attached storage can help an organization manage media by centralizing it and making it accessible to individuals based on access permissions, without requiring the management of multiple individual drives, an approach sometimes jokingly referred to as a “sneakernet.”
I have been in the “storage” world for nearly two decades, both as a purchaser/user and as a vendor of high-end broadcast storage for playout and ingest, and my views on hardware- versus software-defined solutions have definitely shifted over the past five years. In the past I would have recommended only a hardware-based solution, as software-defined storage simply didn’t have the performance required.
However, software-defined solutions based on ZFS now equal, and in many cases surpass, the performance of hardware-based RAID solutions. Much of this is due to Moore’s law and advances in compute and memory technologies, but also to the various optimizations in the OpenZFS platform. In addition, ZFS was built from the ground up to protect data: with its copy-on-write design, snapshotting, and support for up to three stripes of parity, I can say without a doubt that your media will be very safe on a ZFS-based solution.
My Pick for an OpenZFS NAS
DigitalGlue has developed its own solution, creative.space, which combines the best of the criteria typically weighed in storage decisions. It provides industry-leading performance and data integrity via a highly optimized build of OpenZFS, and it offers a scalable product line to fit organizations of any size. Finally, its unique approach of providing on-premises managed storage with an OPEX pricing model, much like the cloud providers, all but eliminates the need for a capital expenditure request. Please feel free to visit https://www.creative.space for more information.
DigitalGlue will be at booth 421 during the Church Facilities Conference & Expo, so be sure to stop by!
Philip Grossman is a 20-plus-year media industry veteran and is currently the Principal at PGP LLC and Solution Architecture Group, where he provides advisory services to leading global media companies. Prior to his current roles, he was the Vice President of Solutions Architecture and Engineering at Imagine Communications and Senior Director of Media Technologies and Strategy at The Weather Channel. He was the host and producer for the Discovery Channel’s “Mysteries of the Abandoned: Chernobyl’s Deadly Secrets.” Grossman is also a manager for the Atlanta Chapter of the Society of Motion Picture and Television Engineers.