The U.S. Postal Service (USPS) delivers over 155 billion pieces of mail across the country and its territories each year—we’re talking a lot of stamps. As inexpensive and convenient as printing your own postage is, some people abuse it by re-printing the same stamp several times, committing fraud and costing the service significant revenue.
To solve this, the USPS needed the ability to scan a stamp, check whether it’s been used or not, and accept or reject it—in an instant—anywhere from Maine to Guam. For that, it needed a supercomputer.
“For years we have used midrange and high-performance computing resources as well as mainframes to help move and track U.S. mail across the globe,” says Scot Atkins, IT program manager at the USPS. “Resources such as these can also be helpful in identifying delivery ‘exceptions,’ including mail that might have insufficient postage, fraudulent duplication of postage or security-related issues.
“Unfortunately, the midrange servers we were using several years ago couldn’t keep up. They required 36 hours for processing each 24-hour batch workload. We decided to take a new approach that would avoid batch processing and move to more real-time streaming and complex event processing.”
USPS started working with SGI and FedCentric Technologies in 2007 on a new supercomputer for processing mail. In 2010, the organizations implemented an Oracle TimesTen In-Memory Database that runs on Linux and is powered by an Intel Xeon processor-based SGI UV 1000 system that includes 1,024 cores and 16TB of shared memory. Three years later, in 2013, the USPS began to expand its infrastructure with an SGI UV 2000 system, which uses Xeon processor models E5-4640 and E5-4603, that has a combined total of 4,096 cores and 32TB of memory.
Passive adaptive scanning systems (PASS) located in thousands of postal stations across the United States (and even some outside of the country) are connected to USPS’s centrally located supercomputer, which analyzes new data for sorting and routing purposes and compares it with billions of existing records to check for fraud.
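At its core, the fraud check described above is a membership test: has this piece of postage been scanned before? The real system answers that question against billions of records held in an Oracle TimesTen in-memory database; the sketch below is only a toy illustration of the idea, with a Python set standing in for the database and hypothetical indicium IDs.

```python
class PostageValidator:
    """Toy duplicate-postage check (hypothetical sketch).

    A set stands in for the in-memory database of previously
    accepted postage identifiers.
    """

    def __init__(self):
        self._seen = set()  # indicium IDs already accepted

    def validate(self, indicium_id: str) -> bool:
        """Accept a stamp on first scan; reject any repeat."""
        if indicium_id in self._seen:
            return False          # duplicate -> suspected fraud
        self._seen.add(indicium_id)
        return True               # first use -> accept


validator = PostageValidator()
print(validator.validate("IMI-0001"))  # True  (first scan: accept)
print(validator.validate("IMI-0001"))  # False (re-printed copy: reject)
```

The appeal of an in-memory store for this workload is that each lookup is effectively constant-time, which is what makes a sub-second round trip feasible at the scan volumes described below.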
“To maximize the benefit of those units, our goal was to achieve a 300-millisecond round trip or less for data to and from the supercomputer. Because most of that travel time is consumed by sending data over the Internet, we needed an incredibly fast response time from the supercomputer,” explains Atkins.
Atkins says the current infrastructure has an average round-trip time of between 50 and 100 milliseconds within the continental United States. A more remote location such as Guam can complete the trip in 225 milliseconds on average. “With that performance, we can provide near-real-time responses for 15,000 PASS devices and manage 10 million packages per hour at our peak.”
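The figures Atkins cites imply the scale of the lookup workload and how the 300-millisecond budget gets spent. A quick back-of-envelope check, assuming scans arrive evenly over the hour:

```python
# Peak throughput stated in the article: 10 million packages per hour.
packages_per_hour = 10_000_000
scans_per_second = packages_per_hour / 3600
print(round(scans_per_second))  # roughly 2778 lookups per second at peak

# Latency budget: of the 300 ms round-trip target, 50-100 ms is typical
# network transit within the continental U.S., leaving the remainder
# for the in-memory lookup, queuing, and the return trip.
budget_ms = 300
network_ms = 100  # upper end of the reported average
print(budget_ms - network_ms)  # 200 ms of headroom for everything else
```

In other words, the database side of each transaction has to answer in a small fraction of the total budget, several thousand times per second, which is the case for keeping the entire record set in memory.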
The benefits to customers of a more connected and efficient USPS include not only help in keeping postal rates stable, but also sorting and routing algorithms that make Sunday deliveries possible.
The USPS signed an exclusive deal with Amazon in late 2013 to offer Sunday package delivery, something no competing package service offered.
“The supercomputer can determine the correct carrier route if a package is to be delivered Monday through Saturday. And it can also provide dynamic routing for a reduced complement of carriers on Sunday,” Atkins says. “The move to electronic postage, growing demand for analysis and tracking, geospatial analysis, and new reporting requirements mean that we process a tremendous amount of data every day.”
With this much big data analytics going into routing mail across thousands of miles, it might be time we stop calling it “snail mail.”
Top image credit: JD Hancock
This content was originally published on the Intel Free Press website.