New Record for Internet Data Transmission Speed Between Global Hemispheres

A group of scientists and engineers from Brazil and the US successfully transferred data between São Paulo and Miami at approximately 100 gigabits per second (image: FreeImages.com)

A group of researchers and engineers affiliated with institutions in Brazil and the United States has broken the previous record for data transmission speed between the Southern and Northern hemispheres. The Brazilian participants are with the São Paulo Research and Analysis Center (SPRACE), São Paulo State University’s Scientific Computing Center (NCC-UNESP), and the Academic Network of São Paulo (Rede ANSP), all supported by FAPESP. The US participants are with America’s Path (AMPATH), a project developed by Florida International University, and the California Institute of Technology (Caltech).

In their first experiment, they transferred data from NCC-UNESP’s data center in São Paulo to Miami in the US, sustaining a stable rate of approximately 85 gigabits per second (Gbps) for 17 hours. That is roughly 8,500 times the transmission capacity of a typical residential broadband service in Brazil, which is 10 megabits per second (Mbps) at most.

Shortly afterward, in another experiment performed in the reverse direction, they successfully transferred data for an hour from Miami to SPRACE (whose systems are installed at NCC-UNESP) at an average rate of 96.56 Gbps, with a peak of 97.56 Gbps and a rate that never dropped below 95.86 Gbps, equivalent to nearly 10,000 times the transmission capacity of a typical residential broadband service in Brazil.
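As a quick sanity check, the multiples quoted above follow directly from the reported rates; a minimal sketch using the article’s own figures (10 Mbps as the residential baseline):

```python
# Figures quoted in the article.
residential_bps = 10e6       # typical Brazilian residential broadband, 10 Mbps
outbound_bps = 85e9          # São Paulo -> Miami, ~85 Gbps sustained
inbound_bps = 96.56e9        # Miami -> São Paulo, 96.56 Gbps average

print(outbound_bps / residential_bps)        # 8500.0 -> "approximately 8,500 times"
print(round(inbound_bps / residential_bps))  # 9656   -> "nearly 10,000 times"
```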

The new record was set during the Supercomputing 2016 conference (SC16), one of the world’s largest and most prestigious events on high-performance computing, networking, storage and analysis, held in Salt Lake City (USA) from November 13 to 18, 2016.

“This was the third time we set a record for data transmission between the Southern and Northern hemispheres at this conference in collaboration with Caltech,” said Rogério Iope, executive manager of NCC-UNESP and coordinator of the demonstrations, in an interview with Agência FAPESP.

“In 2004, when SPRACE began operating and took part for the first time in demonstrations of international connectivity at Supercomputing, we achieved a record of 1 Gbps using the technology then available,” Iope said. “In 2009, one month after NCC-UNESP’s data center opened, we achieved 8 Gbps for both outbound and inbound data, demonstrating the facility’s capacity to handle data traffic.”

Continuous improvement

According to Iope, SPRACE has been able to set successive records for speed of data transmission between hemispheres thanks to the continuous improvement of its network infrastructure.

The computational analysis center is part of the Worldwide LHC Computing Grid (WLCG), a globally distributed computing infrastructure comprising more than 200 research institutions around the world. The WLCG operates like a single machine, processing and storing the vast amounts of data produced by experiments performed using the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research, in Switzerland.

When datasets are generated that serve as the basis for analyzing an aspect of an experiment running in the LHC, the system identifies the sites with currently idle computers and sends the datasets to them for processing.

The network infrastructure linking all of these computing centers is hierarchically classified into tiers. Tier 0 is CERN: all data pass through this central hub. Tier 1 comprises 11 sites, including major national research centers such as Fermilab and Brookhaven National Laboratory in the US. Tier 2 comprises over 140 sites, including research institutions such as UNESP. Iope explained that these sites must have sufficient bandwidth to receive and process the datasets, feeding the data to PC clusters in physics institutes around the world so that groups of scientists and individuals can analyze the LHC data from their own desks.

“The higher the bandwidth, the faster a dataset can be transferred to a computing center like SPRACE so that researchers can analyze the data and produce results potentially leading to new scientific discoveries,” he said.

According to Iope, each of the datasets circulating on the WLCG contains several terabytes of data. A terabyte (TB) is equivalent to almost 9 trillion bits (about 8.8 trillion, counting a terabyte as 2^40 bytes).

For example, it takes about two and a half hours to transfer a 10 TB dataset to a computing center with a 10 Gbps channel, compared with less than 15 minutes to transfer the same dataset to a center such as SPRACE using a 100 Gbps channel.
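The timings above follow from simple arithmetic; a minimal sketch, assuming binary terabytes (2^40 bytes) and ignoring protocol overhead:

```python
def transfer_time_seconds(size_tb: float, rate_gbps: float) -> float:
    """Idealized transfer time for size_tb terabytes over a rate_gbps link,
    ignoring protocol overhead, retransmissions and congestion."""
    bits = size_tb * 2**40 * 8          # 1 TB ~ 8.8 trillion bits
    return bits / (rate_gbps * 1e9)

# 10 TB over a 10 Gbps channel: about 2.4 hours
print(transfer_time_seconds(10, 10) / 3600)

# The same dataset over a 100 Gbps channel: under 15 minutes
print(transfer_time_seconds(10, 100) / 60)
```

The real transfers run somewhat longer than this ideal figure, which is consistent with the article’s “about two and a half hours” for the 10 Gbps case.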

“We’ve invested heavily in our network infrastructure to keep it always at the cutting edge, mainly because we’re integrated with CERN’s distributed computing grid. One of our obligations is to maintain a reliable high-speed network for data transmission so we can operate jointly with other computing centers,” said Sérgio Novaes, Scientific Director of NCC-UNESP and head of SPRACE.

“Whenever a new technology comes along, we try as hard as possible to make sure our machines can run at this new technological level. That’s what happened in 2016 with the unveiling of two 100 Gbps international channels between São Paulo and Miami, enabling us to set a new record for data transmission between the Southern and Northern hemispheres.”

Litmus test 

The two channels interconnect São Paulo and Miami at the network access point known as the NAP of the Americas, via submarine fiber optic cables under the Atlantic and Pacific Oceans, forming a ring around South America.

The channels function as redundant systems: if traffic via the channel that passes under the Atlantic fails, for example, data transmission can resume via the Pacific channel.

The Atlantic channel began operating experimentally in early 2016 and is run by Rede ANSP, which interconnects universities and research institutions in São Paulo State with academic networks in the US and other countries.

“The experiments performed by researchers at SPRACE and NCC-UNESP in November represented the final stage of certification and functioned as a litmus test for the Atlantic channel, which is now operating at its full nominal capacity,” said Luiz Fernandez Lopez, general coordinator of Rede ANSP.

“These final tests strengthen the robustness of this link for future use by researchers in areas such as high-energy physics, astronomy, bioinformatics and medicine, which currently require the highest data transmission capacity.”

The Pacific channel, which went live experimentally in June 2016, is run by Brazil’s academic Internet – the National Higher Education & Research Network (RNP) – and will interconnect universities and research institutions outside São Paulo State with academic networks abroad.

However, this channel is not yet fully operational because tests revealed a 1% error rate in data transmission between São Paulo and Santiago, Chile.

“Although it sounds negligible, a transmission error of that order of magnitude is terrible,” Lopez explained. “It causes loss of bandwidth and signal delay, as well as increasing the time taken to transmit a message.”

According to Lopez, the Pacific channel should be fully operational by March 2017.

Also, he added, most of the data traffic between universities and research institutions in São Paulo State and academic networks abroad is being routed via four 10 Gbps links between São Paulo and Miami that are operated by Rede ANSP.

“We didn’t deactivate these 10 Gbps channels when the 100 Gbps links were installed, so our data transmission capacity now totals 240 Gbps,” he said.
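The 240 Gbps total is simply the sum of the parallel links Lopez describes; a sketch using those figures:

```python
# Parallel São Paulo-Miami links cited by Lopez:
# four legacy 10 Gbps channels plus the two new 100 Gbps channels.
links_gbps = [10, 10, 10, 10, 100, 100]
print(sum(links_gbps))  # 240
```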

The demand for international data traffic from universities and research institutions in São Paulo State currently ranges from 5 Gbps to 10 Gbps depending on the day of the week and the time of day, he added, while national demand is approximately 40 Gbps.

For research in areas such as high-energy physics and astronomy, which requires data generated by scientific instruments located abroad, such as the LHC or radio telescopes in Chile, demand is far higher – and it is set to increase from 2018 as data transmission system testing begins for two new telescopes in Chile: the Large Synoptic Survey Telescope (LSST), which is being built by a US consortium, and the Thirty Meter Telescope (TMT), run by Caltech and the University of California.

“Together, these two new telescopes will require data transmission at 20 Gbps for testing and up to 80 Gbps by 2020,” Lopez said. “Anticipating this demand, we’ve already started preparing the infrastructure needed to provide transmission channels with this capacity.”