Google Cloud developer calculates 100 trillion digits of pi
Long history of pi gets a new entry thanks to Google Cloud
The long and chequered lifespan of mathematical icon pi has been extended even further thanks to Google Cloud.
Google Developer Advocate Emma Haruka Iwao successfully calculated pi to 100 trillion digits using the company’s cloud platform.
What’s even more striking is that this is the second time in just three years that Iwao has broken the record.
Why does this matter?
Mathematicians have been cracking away at calculating pi to its limits since the days of Ancient Egypt, Greece, and Babylon.
Google openly admitted that you might not need to “calculate trillions of decimals of pi” but said the “massive calculation demonstrates how Google Cloud’s flexible infrastructure lets teams around the world push the boundaries of scientific experimentation".
Though pi-related calculations pop up in everything from the theory of relativity to engineering problems and GPS mapping, these types of extreme calculations are generally used as a benchmarking tool by computer scientists to prove and assess the power of their hardware.
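The article doesn’t dig into the math, but record-scale pi computations of this kind are typically built on the Chudnovsky series, a rapidly converging formula that yields roughly 14 new decimal digits per term. The Python sketch below is a minimal illustration of that series, not the record software itself: real attempts rely on binary splitting, FFT-based big-integer multiplication, and massive parallel I/O rather than anything this naive.

```python
# Minimal sketch of the Chudnovsky series using Python's standard decimal
# module. Illustrative only -- record attempts use far more sophisticated
# implementations of the same family of formula.
from decimal import Decimal, getcontext

def chudnovsky_pi(digits: int) -> Decimal:
    """Compute pi to roughly `digits` decimal places."""
    getcontext().prec = digits + 10              # guard digits for rounding
    terms = digits // 14 + 2                     # ~14.18 digits per term
    linear = Decimal(13591409)                   # 13591409 + 545140134*k
    m = Decimal(1)                               # (6k)! / ((3k)! * (k!)^3)
    x = Decimal(1)                               # (-640320^3)^k
    total = linear                               # the k = 0 term
    for k in range(1, terms):
        m = m * (12 * k - 10) * (12 * k - 6) * (12 * k - 2) / (k ** 3)
        x *= -262537412640768000                 # 640320^3, with (-1)^k folded in
        linear += 545140134
        total += m * linear / x
    return Decimal(426880) * Decimal(10005).sqrt() / total

print(chudnovsky_pi(50))  # 3.14159265358979323846...
```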
How did they do it?
Google Cloud says it used its generally available Compute Engine service to make the record calculation.
The tech giant attributed its improved result, compared with its previous attempt in 2019, to better networking and storage.
The project was able to achieve 100 Gbps of egress bandwidth, a huge improvement on the 16 Gbps available when the team made its 31.4-trillion-digit calculation in 2019.
The project used the new Google Virtual NIC (gVNIC) network driver, which is integrated with Google’s Andromeda virtual network stack.
Google also attributed the success of the project in large part to improved storage, saying that as the “dataset doesn’t fit into main memory, the speed of the storage system was the bottleneck of the calculation”.
For this job, the team opted for Balanced Persistent Disk, a new type of persistent disk that Google says offers up to 1,200 MB/s read and write throughput and 15-80k IOPS.
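A quick back-of-envelope sketch shows why storage, rather than networking, set the pace. The 100 Gbps and 16 Gbps egress figures and the 1,200 MB/s per-disk throughput are the numbers quoted above; the 500 TB working set is a hypothetical stand-in for the calculation’s temporary dataset, which the article only says was too large for main memory.

```python
# Rough throughput comparison. Only the rates come from the article; the
# 500 TB working-set size is an assumed, illustrative figure.
TB = 1e12
working_set = 500 * TB                       # hypothetical temporary dataset

rates = {
    "100 Gbps egress (2022)": 100e9 / 8,     # bits/s -> bytes/s = 12.5 GB/s
    "16 Gbps egress (2019)": 16e9 / 8,       # 2 GB/s
    "one balanced PD at 1,200 MB/s": 1.2e9,  # per-disk limit quoted above
}

for label, bytes_per_sec in rates.items():
    hours = working_set / bytes_per_sec / 3600
    print(f"{label}: ~{hours:,.1f} hours to stream 500 TB once")
```

With a single disk nearly an order of magnitude slower than the network link, striping the dataset across many disks attached to multiple machines is presumably how the team kept the computation fed, and why disk throughput rather than bandwidth was the number to beat.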
Those interested in the nitty-gritty of the project can head to GitHub to find the code Google used.
Google will also host a live webinar on June 15 to share more about the experimentation process and results; you can head here to join.