
Meet the Next Generation of AI Solutions

Make New Breakthroughs at Scale

Get Ahead of the Curve with Bow Pod

Bow Pod256 IPU

Machine intelligence is being used today to solve many of the world’s most complex business and societal challenges – from discovering life-saving medicines to predicting stock market trends. But achieving consistently competitive AI performance at scale remains a significant challenge in the data center, especially for new and large models.

Graphcore’s new generation of Bow Pod systems is built from the ground up to accelerate AI performance from experimentation through to production.

Based on the Bow IPU, the first processor in the world to use Wafer-on-Wafer 3D stacking technology, Graphcore’s ground-breaking Bow Pods deliver a 40% leap forward in performance and up to 16% more power efficiency than previous generation systems. This allows AI practitioners to not only run today’s machine learning workloads faster but to more easily explore and create new types of models.

Explore, Build, and Grow with IPU-POD™

Graphcore’s second-generation IPU-POD systems are designed to accelerate large and demanding machine learning models for flexible and efficient scale out, enabling AI-first organizations to:

  • unleash world-leading performance for state-of-the-art models
  • push AI application efficiency to its maximum
  • optimize total cost of ownership.

In terms of performance, Graphcore’s most recent MLPerf Training benchmark submission demonstrates two things very clearly: IPU-POD systems are getting larger and more efficient, and Poplar’s software maturity means they are also getting faster and easier to use.

Software optimization continues to deliver significant performance gains, with ResNet-50 training completed in only 28.3 minutes on the IPU-POD16 system and in just 3.79 minutes on the IPU-POD256.
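As a rough back-of-the-envelope check, the quoted ResNet-50 times imply a measurable speedup and scaling efficiency. The sketch below assumes the system sizes suggested by the product names (16 IPUs in an IPU-POD16, 256 in an IPU-POD256); actual efficiency depends on batch sizes and configuration details not stated here:

```python
# Back-of-the-envelope scaling check for the MLPerf ResNet-50 times quoted above.
# Assumption: IPU-POD16 has 16 IPUs and IPU-POD256 has 256 (inferred from the names).

pod16_minutes = 28.3   # ResNet-50 time-to-train on IPU-POD16
pod256_minutes = 3.79  # ResNet-50 time-to-train on IPU-POD256
ipu_ratio = 256 / 16   # 16x more IPUs in the larger system

speedup = pod16_minutes / pod256_minutes       # measured speedup, ~7.5x
scaling_efficiency = speedup / ipu_ratio       # fraction of ideal linear scaling

print(f"Measured speedup:   {speedup:.2f}x")
print(f"Ideal speedup:      {ipu_ratio:.0f}x")
print(f"Scaling efficiency: {scaling_efficiency:.0%}")
```

Sub-linear scaling at this size is expected for data-parallel training, since communication and synchronization costs grow with the number of processors.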

Access Graphcore's latest performance results

Flexible Compute Customized for your Deployment Needs

Machine intelligence workloads have very different compute demands so flexibility in production deployment is critical. Optimizing the ratio of AI to host compute can help to maximize performance, while improving total cost of ownership.

Graphcore Bow Pod systems are disaggregated to enable customized compute, allowing flexible mapping of the number of servers and switches to the requisite number of IPU platforms, ensuring deployment is better tailored to production AI workloads.


Built for Straightforward Deployment and Development

Ease of deployment has been a paramount consideration in designing Graphcore systems. The result is a solution that supports standard hardware and software interfaces and protocols and integrates effectively with existing data center infrastructures.

Graphcore’s Poplar software stack supports open industry standards and frameworks, and much of it is open source.


For deeper control and maximum performance, the Poplar framework enables direct IPU programming in Python and C++. Poplar allows effortless scaling of models across many IPUs without adding development complexity, so developers can focus on the accuracy and performance of their application.

Visit the Graphcore Developer Portal

Nigel Toon, Graphcore’s CEO, explains how Atos and Graphcore are jointly tackling the AI Challenge