A storage area network (SAN) is a dedicated high-speed network that connects disk subsystems and tape drives to servers. Access to data in a SAN is block-based: the accessing server addresses raw storage blocks and manages the file system itself, rather than relying on a file system on the storage device. SANs were introduced to improve application availability and performance by separating storage traffic from the rest of the LAN, and they make it easier for organizations to allocate and manage storage resources efficiently.
“Instead of isolated storage capacity across servers, you can share a pool of capacity across a range of different workloads and distribute it as needed. It’s easier to secure and manage,” said Scott Sinclair, senior analyst at Enterprise Strategy Group.
A storage area network consists of interconnected hosts, network equipment such as switches, and the storage devices. These components can be connected to each other via different protocols:
Leading Manufacturers in the Enterprise SAN Market:
These providers offer entry-level, mid-range, and high-end SAN switches for environments that require more capacity and performance.
Gartner's market researchers define a storage area network as follows: “A SAN consists of two layers: The first layer – the storage plumbing layer – establishes connectivity between the nodes in a network and transports device-oriented commands and status information. At least one storage node must be connected to this network. The second layer – the software layer – uses software to deliver services that operate over the first layer.”
Both Storage Area Network and Network Attached Storage (NAS) are network-based storage solutions. Here are the main differences between SAN and NAS:
A SAN usually uses a Fibre Channel connection, while a NAS is typically connected to the network via standard Ethernet.
A SAN stores data at the block level, while a NAS accesses data in the form of files.
To a client operating system, a SAN usually appears as a local disk and exists as its own separate network of storage devices, while a NAS appears as a file server.
A SAN works with structured workloads such as databases, while a NAS is designed more for unstructured data such as videos and images.
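The block-versus-file distinction above can be made concrete in a few lines of code. The sketch below is illustrative only: a scratch file stands in for a block device image, and the function names are invented for this example, not taken from any storage API.

```python
import os
import tempfile

BLOCK_SIZE = 512  # a typical logical block size

def read_block(device_path: str, lba: int) -> bytes:
    """Block-level access, as over a SAN: the host addresses raw
    logical block numbers (LBAs); the host, not the storage device,
    owns the file system."""
    with open(device_path, "rb") as dev:
        dev.seek(lba * BLOCK_SIZE)
        return dev.read(BLOCK_SIZE)

def read_file(nas_path: str) -> bytes:
    """File-level access, as over NAS (NFS/SMB): the host names a
    file; the NAS translates that name to blocks internally."""
    with open(nas_path, "rb") as f:
        return f.read()

# Demo with a temporary file standing in for a block device image.
with tempfile.NamedTemporaryFile(delete=False) as img:
    img.write(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE)
    path = img.name

assert read_block(path, 1)[:1] == b"B"   # host chose LBA 1 itself
assert read_file(path).startswith(b"A")  # whole object, no LBAs involved
os.unlink(path)
```

The point of the sketch: with block access the client decides which block to read; with file access it merely names the data and leaves the layout to the storage system.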
“Most companies have deployed some form of both NAS and SAN — often the decision depends on the workloads or applications involved,” Sinclair explains.
Combining SAN and NAS – that is, block and file storage – in a single system gave rise to unified storage (multi-protocol storage). A single system can then support both Fibre Channel and iSCSI block storage as well as file protocols such as NFS and SMB. Although several vendors now have options in this area in their portfolios, the development of unified storage is generally attributed to NetApp.
Storage vendors regularly add new features to their SAN offerings to make their solutions more efficient and enable users to better scale and manage them. When it comes to performance, flash memory is one of the most important innovations in this area. There are both hybrid arrays, which combine traditional hard drives with flash drives, and all-flash arrays, which rely solely on flash storage. In the enterprise storage world, flash has been especially prevalent in SAN environments, as the structured data workloads in a SAN tend to be smaller and easier to migrate.
The development of SAN products is also heavily influenced by artificial intelligence: with AIOps features, vendors are trying to integrate AI into their monitoring and support tools. AIOps combines machine learning and analytics to help businesses
monitor system logs,
unify storage provisioning,
avoid errors during peak loads, and
optimize workload performance.
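The baseline-and-deviation idea behind such AIOps tooling can be sketched in a few lines. This is a deliberately minimal example with made-up telemetry, not any vendor's actual algorithm: it flags latency samples that sit far above a trailing-window average, the kind of check that can surface a peak-load problem before it becomes an outage.

```python
from statistics import mean, stdev

def flag_anomalies(latencies_ms, window=5, k=3.0):
    """Flag samples more than k standard deviations above the
    trailing-window mean -- a toy stand-in for the baseline checks
    AIOps tools run on array telemetry."""
    flagged = []
    for i in range(window, len(latencies_ms)):
        base = latencies_ms[i - window:i]
        mu, sigma = mean(base), stdev(base)
        # The floor on sigma avoids flagging tiny jitter on a flat baseline.
        if latencies_ms[i] > mu + k * max(sigma, 0.1):
            flagged.append(i)
    return flagged

samples = [1.1, 1.0, 1.2, 1.1, 1.0, 1.1, 9.8, 1.2]  # one latency spike
print(flag_anomalies(samples))  # prints [6], the spike's index
```

Production AIOps platforms use far richer models, but the workflow is the same: learn a baseline from history, then alert on deviations.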
In their latest “Magic Quadrant for Primary Storage,” Gartner's market researchers count AIOps features among the most important storage capabilities when choosing a platform for structured data workloads: “AIOps can address business needs, for example when it comes to cost optimization and capacity management, proactive support, workload simulation, growth forecasting and/or asset management strategies.”
While converged arrays and appliances have blurred the lines between SAN and NAS, hyperconverged infrastructure (HCI) takes the consolidation of storage options one step further: HCI combines storage, compute, and networking in one system to reduce data center complexity and increase scalability. Hyperconverged platforms typically run on commodity servers and include:
a hypervisor for virtualized computing,
software-defined storage, and
software-defined networking.
HCI can contain any type of storage: block, object, and file storage can be combined in one platform, and multiple nodes can be clustered together to create shared capacity pools. This approach is popular with businesses, especially since many modern applications rely on file and object storage and unstructured data is growing much faster than structured data. Hyperconverged infrastructure cannot completely replace SAN deployments in every case; companies must decide based on their specific requirements.
Another trend influencing the evolution of traditional SAN storage is the shift toward consumption-based IT: pay-as-you-go hardware models are designed to provide cloud-like pricing structures for on-premises infrastructure. The hardware is delivered on site and is largely maintained and managed by the manufacturer. Customers lease these systems through a variable monthly subscription that is billed based on hardware usage.
This is well received by companies looking for alternatives to buying IT equipment outright: according to an IDC study, 61 percent of companies surveyed plan to move to a pay-per-use model for infrastructure. The market researchers estimate that by 2024, half of the world's data center infrastructure will be consumed as a service. Gartner analysts expect that by 2025, more than 70 percent of enterprise storage capacity will be delivered through pay-per-use offerings – a significant increase from 40 percent in 2021.
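A common shape for such consumption billing is a fixed charge on a committed capacity plus a metered charge for usage above it. The sketch below illustrates that structure only; the rates, the commitment size, and the function name are hypothetical, not any vendor's actual pricing.

```python
def monthly_bill(used_tib: float,
                 committed_tib: float = 50.0,
                 base_rate: float = 20.0,    # $/TiB of committed capacity (hypothetical)
                 burst_rate: float = 30.0):  # $/TiB above the commitment (hypothetical)
    """Sketch of a pay-per-use storage invoice: a fixed charge for the
    committed capacity plus a metered charge for any overage."""
    overage = max(0.0, used_tib - committed_tib)
    return committed_tib * base_rate + overage * burst_rate

print(monthly_bill(40.0))  # under commitment: 1000.0 (the committed floor still applies)
print(monthly_bill(60.0))  # 10 TiB overage: 1000.0 + 300.0 = 1300.0
```

The committed floor is what distinguishes these offerings from pure cloud metering: the vendor guarantees on-site capacity, so the customer pays for the commitment even in quiet months.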
- Alexander Best, Datacore
“The advantage of SDS is mainly in its diversity: it can be used in different architectures, ensures technological independence and a faster time-to-market. But given the hardware in use now, storage will remain an issue for the next 20 years.”
- Andreas Schmidt, Dropbox Business
“As a cloud provider, we are not talking about disks or tapes, but about availability and easy data management. Especially now, during the new lockdown, it is clear what matters to users: being able to access their own data from any location and work with it productively.”
- Stefan Roth, Fujitsu
“It is fundamental to analyze, combine, and make the data flows in a company compatible with each other. In enterprise SDS solutions it is relatively easy to map and analyze complete data workloads (for example, with ETERNUS DSP or Qumulo).”
- Kristian Bacic, Huawei
“It takes more than a central dashboard to monitor the storage of the future. Sooner or later, AI systems will be at the heart of a wide range of automation and load balancing.”
- Goetz Mensel, IBM
“With the acquisition of Red Hat, a near-perfect ecosystem was added to our portfolio. Above all, our job was to integrate it seamlessly: first, that means implementing backup of persistent data in the container environment; second, being able to back up the containers themselves; and third, realizing the necessary automation.”
- Johannes Wagmuller, NetApp
“We see a growing desire among customers to use storage as a service and to bring proven scenarios from the public cloud to an on-premises infrastructure.”
- Markus Grau, Pure Storage
“Customers want to consume data and not manage their storage. The key question is: how do I get the necessary innovation for my data without having to renew the entire system? Also, it is important to use a degree of technological frugality. Because the more specialized systems I use in parallel, the greater the risk of new silos being created.”
- Marco Arnhold, Tech Data
“As a distributor, we always sit a bit between two stools. Given the technological diversity that prevails in the market today, it is important to have a wide network of specialists. That way we can combine classic storage elements with topics such as hybrid cloud, containerization, and the like in customer projects.”
- Manfred Berger, Western Digital
“The topic of network-attached storage (NAS) often falls short in the discussion. With integrated drives, I am usually limited in terms of capacity and speed. But if I want to store and analyze very large amounts of data, I really have to think in terms of the network. The key phrase here is disaggregating resources and making them available in slices. NAS – or ‘NVMe over Fabrics’ – provides valuable solutions, especially when it comes to artificial intelligence and other data-intensive processes.”
This post is based on an article from our US sister publication Network World.