✏️ Pixar finds out transformative art calls for transformative data
Plus, Google Cloud drops the gauntlet and JLL reports a huge second quarter for European data centers
Good Friday morning, and welcome to Data Center Digest.
We’re looking at data centers and the people, technologies, and trends that make them run.
Today’s Newsletter:
✏️ Pixar finds out transformative art calls for transformative data
⛅ Google Cloud drops the AI gauntlet
Big Deals: More Google in Ohio and Colorado, plus NVIDIA projects $1 trillion in data centers
Resources: Data Center Hawk’s Youtube channel is 🔥
Est. read time: 6 mins, 45 secs
⚠️ First, a favor: After reading, please reply directly to this email and tell us how we’re doing! This allows us to reach your inbox and really helps us out.
- News -
Transformative art requires transformative computing
Still from Pixar’s new film, “Elemental” Inverse.com
As Pixar’s illustration and animation technology evolves, so do its data needs. Pixar turned to VAST Data to take its latest film, Elemental, to the next level.
New tech means more data
Pixar used a volumetric animation and rendering method to create the most immersive animated film ever made, which required a new way to scale detailed characters and environments.
Where recent Pixar films used a geometric method of creating and delivering graphics, Elemental’s volumetric technique generated six times the data footprint and computational demand of the studio’s previous film, Soul.
Pixar tapped VAST Data to manage the new data needs of its revolutionary technique.
VAST consolidated a massive 7.3 petabytes of data into a unified datastore cluster managed through a single high-performance namespace, tiering data onto cost-effective flash storage for near-instantaneous retrieval. This kept Pixar’s render farm constantly busy while improving observability and analytics.
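To make the tiering idea concrete, here’s a minimal sketch of a unified-namespace read path, written under our own assumptions: one logical datastore where hot frame data is promoted to flash so render workers never stall. The class, methods, and file paths are hypothetical illustrations, not VAST’s actual API.

```python
# Hypothetical sketch of a unified-namespace, tiered read path (not
# VAST's actual API): new data lands in a cost-effective bulk tier,
# and anything a render worker touches is promoted to the flash tier
# so subsequent reads across the farm are near-instantaneous.
from dataclasses import dataclass, field

@dataclass
class UnifiedDatastore:
    flash: dict = field(default_factory=dict)      # fast tier (hot data)
    capacity: dict = field(default_factory=dict)   # bulk tier (cold data)

    def put(self, path: str, blob: bytes) -> None:
        self.capacity[path] = blob                 # land new data in bulk

    def read(self, path: str) -> bytes:
        # Promote on first access so other render nodes hit flash.
        if path not in self.flash:
            self.flash[path] = self.capacity[path]
        return self.flash[path]

store = UnifiedDatastore()
store.put("elemental/shot042/frame0001.vdb", b"volumetric voxel data")
frame = store.read("elemental/shot042/frame0001.vdb")  # now served from flash
```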
Key features of VAST’s new architecture:
Optimized data access: Compute requirements for Elemental demanded fast, concurrent data access from thousands of processors in the render pipeline; the film needed nearly 2 petabytes of capacity available simultaneously (4-5x more than previous films).
Operational resilience: With multiple projects at different stages of production at any one time, Pixar’s demand for data is enormous. VAST gave the studio the uptime needed to meet its intense production schedules.
Tons of data-rich images: VAST’s data platform allowed Pixar to render nearly 150,000 volumetric frames in Elemental alone.
“We have a saying at Pixar – art pushes technology. The data centers we have on campus are the core pieces of infrastructure that enable us to do this. Periodically, we need to reconfigure and upgrade that infrastructure to accommodate the latest computer technologies. Our data center team is tasked with maintaining the ‘state-of-the-art’ to enable the ‘art.’” – Eric Bermender, Head of data center and IT Infrastructure, Pixar
Google Cloud drops the gauntlet with new AI release
Announcement of the A3 supercomputer. geeky-gadgets.com
Google Cloud has announced significant advancements in its AI-optimized infrastructure based on the newly released NVIDIA H100 GPUs.
As we’ve discussed at length, hyperscale cloud providers have to re-architect and rebuild many of their existing computing systems to keep up with the surging demands of AI.
A3 supercomputer
With its latest AI offering, Google Cloud combines NVIDIA’s new $30,000 H100 GPUs with its custom-designed IPUs to create the A3 VM, which reportedly achieves 3x faster training times and 10x greater networking bandwidth than its predecessor.
“A3 is really purpose-built to train, tune, and serve incredibly demanding and scalable generative AI workloads and large language models,” according to Mark Lohmeyer, VP of compute and ML infrastructure at Google Cloud.
TPUs
Headlining the announcement, though, is Google’s new 5th-generation TPU (a TPU is an AI-accelerator application-specific integrated circuit that Google first introduced in 2016; think high-volume, low-precision compute).
The v5e TPU is built specifically for medium- to large-scale AI training. Distinguished by its cost-efficiency and scalability, Google Cloud’s latest TPU delivers 2.5x the training performance per dollar for LLMs and generative AI models compared to the v4.
The new chip scales to 256 interconnected chips and comes in eight distinct virtual machine configurations, accommodating a wide range of LLM and AI model sizes.
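For a feel of why configurable slice sizes matter, here’s a back-of-the-envelope sketch that picks the smallest slice whose aggregate memory fits a model. The eight slice sizes and the 16GB-per-chip figure are our own assumptions for illustration, not Google’s published v5e specs.

```python
# Back-of-the-envelope slice picker. Eight configurations up to 256
# chips matches the article's count; the exact sizes and per-chip
# memory are assumptions for illustration.
SLICE_SIZES = [1, 4, 8, 16, 32, 64, 128, 256]   # chips per configuration
HBM_PER_CHIP_GB = 16                            # assumed per-chip memory

def smallest_slice(model_gb: float, overhead: float = 2.0) -> int:
    """Smallest chip count holding the model plus working state
    (optimizer state, activations, etc.)."""
    needed = model_gb * overhead
    for chips in SLICE_SIZES:
        if chips * HBM_PER_CHIP_GB >= needed:
            return chips
    raise ValueError("model too large for a single slice")

# A 70B-parameter model in bf16 is roughly 140GB of weights:
print(smallest_slice(140))  # -> 32 chips under these assumptions
```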
With the showcase of these new computing tools, Google plans to stay front and center in the AI arms race.
JLL reports Europe’s 2nd biggest data center quarter ever
Rendering of Interxion’s new Paris data center. DCDynamics.
Global real estate firm and data center juggernaut JLL says that Europe just experienced its second-largest data center quarter in history, and it’s due to AI.
The period from April to June 2023 saw 114MW of take-up across Europe’s leading markets of Frankfurt, London, Amsterdam, Paris, and Dublin (FLAP-D), more than double the 51MW recorded in Q1, and the most on record for a second quarter.
“The gold rush of AI continues to drive data center growth even further and is opening an exciting new chapter for our industry," Tom Glover, head of EMEA data center transactions at JLL, said.
Daniel Thorpe, EMEA data center research lead at JLL, added: “The AI era is here and there is no going back. What we’re seeing play out in the market is that data centers are gearing up to better support increased power and performance requirements. The second half of 2023 will see continued momentum as new data center supply comes to the market.”
Growth in key European markets looks like this:
Frankfurt added 44MW
Paris added 24MW
Dublin added 12MW
London (Europe’s largest data center market) added 7MW
There was also heightened activity for pre-lettings, with 141MW committed in Q2.
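Those numbers are easy to sanity-check; here’s a quick back-of-the-envelope using the figures above (the unlisted remainder presumably falls largely to Amsterdam):

```python
# Sanity-checking JLL's Q2 2023 FLAP-D take-up figures from above.
q2_total_mw = 114
q1_total_mw = 51
by_market_mw = {"Frankfurt": 44, "Paris": 24, "Dublin": 12, "London": 7}

listed = sum(by_market_mw.values())   # 87MW across the named markets
remainder = q2_total_mw - listed      # 27MW unlisted (presumably Amsterdam)
growth = q2_total_mw / q1_total_mw    # ~2.2x quarter over quarter

print(f"Named markets: {listed}MW, remainder: {remainder}MW")
print(f"Q2 take-up was {growth:.1f}x Q1")
```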
- Big Deals -
NVIDIA projects $1 trillion in spending on data centers
NVIDIA’s CEO just put a price tag on the data center industry over the next four years.
Late on Wednesday, Jensen Huang predicted that $1 trillion would be spent in four years on upgrading data centers for AI (GPUs being a big part of that).
As of June 30, Amazon, Microsoft, Google, and Meta had about $334 billion in cash and cash equivalents. Amazon and Meta look the most exposed here: Amazon has $41 billion in cash, and Meta has $64 billion.
The total AI data center bill is $250 billion in the next year and then $750 billion over the following three years, according to NVIDIA.
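For the back-of-the-envelope crowd, the projection checks out against itself, and comparing it with the cash figures above is instructive (illustrative only; that cash obviously isn’t earmarked solely for data centers):

```python
# NVIDIA's projected AI data center spend vs. Big Tech cash on hand,
# using the figures reported above (all in billions of USD).
year_one = 250
next_three_years = 750
total = year_one + next_three_years   # $1,000B, i.e. the $1 trillion headline

big_tech_cash = 334                   # Amazon + Microsoft + Google + Meta
uncovered = total - big_tech_cash     # spend beyond today's cash piles

print(f"Projected total: ${total}B over four years")
print(f"Current cash covers {big_tech_cash / total:.0%}; "
      f"${uncovered}B would have to come from elsewhere")
```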
Google to spend $1.7 billion in Ohio
Google Ohio data center under construction. constructionreviewonline.com
The cash will be used to complete the Columbus and Lancaster data centers currently under development and expand an existing facility in New Albany.
Current and announced data center investments in Central Ohio total more than $4 billion.
More Google, this time Fiber in CO
Google Fiber map. fiber.google.com
Google Fiber will add another Colorado city to its growing footprint.
The agreement with Wheat Ridge, CO will allow GFiber to deliver high-speed internet to businesses and some 20,000 households within the Denver suburb’s city limits.
“The buildout is 100% funded by Google Fiber, will begin in late 2024, and will bring service to its first customer sometime in 2025,” says Google Fiber General Manager Sasha Petrovic.
- Resources -
1. Data Center Hawk is an awesome resource, and their YouTube channel in particular is a wealth of knowledge. From interviews with key players to breakdowns of industry trends, if you’re in the industry, you’ll get smarter watching their stuff. Check it out here.
Thanks a lot for reading! Please let us know how we’re doing by clicking below and replying to this email.