Is your network optimized for large file transfers?

Disk storage used to be an expensive commodity. One of the first commercial hard disk drives shipped with the IBM RAMAC 305 system in 1956, in the form of fifty 24-inch platters housed inside a cabinet bigger than a home refrigerator. Its capacity was a mere 5 MB – enough to hold a modern MP3 or two ripped at 320 kbps – and the cost per MB was a staggering $10,000.

Over the following decades, HDD prices fell steadily, but many devices remained tightly constrained in both their onboard storage and their ability to work easily with large files of any kind. External media such as floppy disks, CD-ROMs and USB thumb drives were, and remain, crucial for transporting photos and other bulky assets between PCs.

Large file transfers can bog down an unprepared WAN.

The rise of the commercial Internet reshaped such data transfers, with increasingly high-bandwidth IP networks providing the new muscle to move information anywhere from the public cloud to a machine in a branch office. While this modernization of corporate WANs replaced many older workflows directly involving physical storage media, it also created fresh challenges in ensuring WAN reliability and security, especially when processing big files.

The large files that today’s WANs have to deal with
Video and software-as-a-service solutions are two common examples of applications that modern WANs have both enabled and struggled to support. In the case of SaaS, its uptake has shifted a great deal of simple, cost-effective LAN activity onto valuable WAN links, while also making it easier than ever to transfer large files externally and bypass previously effective firewall policies.

Moreover, the roundtrip file exchanges made by cloud-based services can magnify the impact on WAN performance. Techniques such as WAN optimization are sometimes used to enhance the network’s ability to handle the large files associated with these applications, but WAN Op alone is often not enough to ensure consistent WAN performance. More specifically, MPLS links may not have enough bandwidth, IPSec VPNs may be unreliable and WAN Op may not be sufficient for processing highly compressed video files.
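To see why those round trips magnify the impact, it helps to remember that TCP throughput on a single connection is roughly capped at window size divided by round-trip time. The sketch below is a back-of-the-envelope model (all figures are illustrative assumptions, not measurements from any particular WAN), but it shows how the same file transfer that is trivial on a LAN can crawl over a high-latency link:

```python
# Back-of-the-envelope model of why round-trip time dominates large
# file transfers on a WAN. The window size, RTT and file size below
# are illustrative assumptions, not measurements.

def transfer_time_s(file_mb: float, rtt_ms: float, window_kb: float) -> float:
    """Approximate transfer time when throughput is capped by the
    classic window/RTT limit (ignores slow start and packet loss)."""
    window_bytes = window_kb * 1024
    rtt_s = rtt_ms / 1000.0
    throughput_bps = window_bytes / rtt_s          # bytes per second
    return (file_mb * 1024 * 1024) / throughput_bps

# A 500 MB asset with a 64 KB window:
lan_s = transfer_time_s(500, rtt_ms=1, window_kb=64)    # about 8 seconds
wan_s = transfer_time_s(500, rtt_ms=80, window_kb=64)   # about 640 seconds
```

An eighty-fold increase in RTT produces an eighty-fold increase in transfer time, which is why adding raw bandwidth alone often does little for chatty, round-trip-heavy cloud workflows.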

“SaaS has shifted low-cost LAN activity over to valuable WAN links.”

The challenge, then, is finding a way to effectively set up and manage different types of links, from MPLS to broadband Internet, so that parallel networks in the WAN can properly support today’s cutting-edge applications. Increasing reliance on the cloud has made traditional WAN architectures less efficient in preserving link quality and handling routine large file transactions from these programs.

Accordingly, adding broadband Internet links as well as software-defined decision-making has become an appealing option for supplying bandwidth and sending mission-critical traffic over the best paths possible. Liabilities such as over-reliance on low-bandwidth MPLS and single points of failure can be mitigated through such aggregation and network intelligence, leading to more efficient workflows with large files.
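The software-defined decision-making described above can be sketched as a simple policy: mission-critical traffic goes to the cleanest, lowest-latency path, while bulk file transfers go to whichever link offers the most bandwidth. The link names and metrics below are hypothetical, and a real controller would measure them continuously rather than hard-code them:

```python
# Minimal sketch of policy-based path selection across aggregated
# WAN links. Link names, metrics and the scoring policy are
# illustrative assumptions, not any vendor's actual algorithm.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    bandwidth_mbps: float
    latency_ms: float
    loss_pct: float
    up: bool = True

def best_link(links: list[Link], critical: bool) -> Link:
    """Mission-critical traffic favors low loss and latency;
    bulk large-file transfers favor raw bandwidth."""
    candidates = [l for l in links if l.up]
    if critical:
        return min(candidates, key=lambda l: (l.loss_pct, l.latency_ms))
    return max(candidates, key=lambda l: l.bandwidth_mbps)

links = [
    Link("MPLS", bandwidth_mbps=10, latency_ms=15, loss_pct=0.01),
    Link("Broadband", bandwidth_mbps=100, latency_ms=35, loss_pct=0.5),
]
best_link(links, critical=True)    # -> the MPLS link
best_link(links, critical=False)   # -> the broadband link
```

Because the broadband link is only consulted when it is up and appropriate, the low-bandwidth MPLS circuit stops being a single point of failure without losing its role as the premium path.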

“[Network accelerator and optimization] solutions only seemed to help with application delivery and did not offer what we needed for large file transfers, file mirroring, syncing and backups,” said John Wunder, director of IT for Magnum Semiconductor. “Talari’s Adaptive Private Networking gives me the ability to leverage the Internet with more effectiveness. APN aggregates my MPLS and Internet links, expanding the bandwidth capabilities at each of my sites.”
