
This session will provide a quick overview of CXL technology and its influence on systems architecture, and explore potential use cases within enterprise applications. Ping Zhou will then discuss ByteDance’s evaluations of CXL technologies. Lastly, Ping will cover ByteDance’s vision of next-generation systems and architecture and the technical challenges ahead for the industry.

Author:

Ping Zhou

Researcher/Architect
Bytedance Ltd.

Ping Zhou is a Senior Researcher/Architect with ByteDance, focusing on next-gen infrastructure innovations with hardware/software co-design. Prior to joining ByteDance, Ping worked with Google, Alibaba and Intel on products including Google Assistant, Optane SSD and Open Channel SSD. Ping earned his PhD in Computer Engineering at the University of Pittsburgh, specializing in emerging memory and storage technologies.


GenAI deployments in the enterprise face a set of challenges rooted in memory:

  • Poor tooling for performance issues arising from the interplay of GPUs and memory
  • Latency caused by data movement and poor memory capacity planning
  • AI training failures under tight memory constraints

Memory, a foundational piece of GenAI infrastructure, suffers from both opacity and immature tooling. AI teams experience this when they need to dig into the infrastructure and improve these foundations to deploy AI at scale.

 

Author:

Rodrigo Madanes

Global AI Innovation Officer
EY

Rodrigo Madanes is EY’s Global AI Innovation Leader. Rodrigo has a computer science degree from MIT and a PhD from UC Berkeley. His technical expertise is evidenced by three patents and by novel AI products he created at both the MIT Media Lab and Apple’s Advanced Technologies Group.

Prior to EY, Rodrigo ran the European business incubator at eBay, which launched new ventures including eBay Hire. At Skype, he was the C-suite executive leading product design globally during its hyper-growth phase, where the team grew the user base, revenue, and profits 100% year over year for three consecutive years.


As Machine Learning continues to forge its way into diverse industries and applications, optimizing computational resources, particularly memory, has become a critical aspect of effective model deployment. This session, "Memory Optimizations for Machine Learning," aims to offer an exhaustive look into the specific memory requirements in Machine Learning tasks and the cutting-edge strategies to minimize memory consumption efficiently.
We'll begin by demystifying the memory footprint of typical Machine Learning data structures and algorithms, elucidating the nuances of memory allocation and deallocation during model training phases. The talk will then focus on memory-saving techniques such as data quantization, model pruning, and efficient mini-batch selection. These techniques offer the advantage of conserving memory resources without significant degradation in model performance.
Additional insights into how memory usage can be optimized across various hardware setups, from CPUs and GPUs to custom ML accelerators, will also be presented. 
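As a concrete illustration of the data-quantization technique mentioned above, here is a minimal, NumPy-only sketch (function names are illustrative, not from any speaker's material) of per-tensor int8 weight quantization and the memory saving it buys:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 using a per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

# A 1024x1024 float32 weight matrix: 4 MiB before, 1 MiB after.
w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)
print(w.nbytes // q.nbytes)  # → 4
```

The round-trip error is bounded by half the quantization step (`scale / 2`), which is why, for many models, int8 weights cost little accuracy while cutting weight memory fourfold.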

Author:

Tejas Chopra

Senior Engineer of Software
Netflix

Tejas Chopra is a Sr. Engineer at Netflix working on the Machine Learning Platform for Netflix Studios and a Founder at GoEB1, the world’s first and only thought leadership platform for immigrants. Tejas is a recipient of the prestigious EB1A (Einstein) visa in the US. Tejas is a Tech 40 under 40 Award winner, a TEDx speaker, a Senior IEEE Member, an ACM member, and has spoken at conferences and panels on Cloud Computing, Blockchain, Software Development and Engineering Leadership. Tejas has been awarded the ‘International Achievers Award, 2023’ by the Indian Achievers’ Forum. He is an Adjunct Professor for Software Development at the University of Advancing Technology, Arizona, an Angel investor and a Startup Advisor to startups like Nillion. He is also a member of the Advisory Board for Flash Memory Summit. Tejas’ experience spans companies like Box, Apple, Samsung, Cadence, and Datrium. Tejas holds a Masters Degree in ECE from Carnegie Mellon University, Pittsburgh.



Author:

Zaid Kahn

VP, Cloud AI & Advanced Systems Engineering
Microsoft

Zaid is currently a VP in Microsoft’s Silicon, Cloud Hardware, and Infrastructure Engineering organization where he leads systems engineering and hardware development for Azure including AI systems and infrastructure. Zaid is part of the technical leadership team across Microsoft that sets AI hardware strategy for training and inference. Zaid's teams are also responsible for software and hardware engineering efforts developing specialized compute systems, FPGA network products and ASIC hardware accelerators.

 

Prior to Microsoft, Zaid was head of infrastructure at LinkedIn, where he was responsible for all aspects of architecture and engineering for datacenters, networking, compute, storage and hardware. Zaid also led several software development teams focused on building and managing infrastructure as code. This included zero-touch provisioning, software-defined networking, network operating systems (SONiC, OpenSwitch), self-healing networks, a backbone controller, software-defined storage and distributed host-based firewalls. The network teams Zaid led built LinkedIn's global network, including POPs, peering for edge services, IPv6 implementation, DWDM infrastructure and datacenter network fabric. The hardware and datacenter engineering teams Zaid led were responsible for water cooling to the racks, optical fiber infrastructure and open hardware development contributed to the Open Compute Project Foundation (OCP).

 

Zaid holds several patents in networking and is a sought-after keynote speaker at top-tier conferences and events. He is currently the chairperson of the OCP Foundation Board, serves on the EECS External Advisory Board (EAB) at UC Berkeley, and is a board member of the Internet Ecosystem Innovation Committee (IEIC), a global internet think tank promoting internet diversity. Zaid has a Bachelor of Science in Computer Science and Physics from the University of the South Pacific.


Dr Yap Ghim Eng

Chapter Lead, Data Privacy Protection Capability Centre
GovTech


To successfully roll out a new system in an organization, it is essential to ensure its effectiveness and obtain stakeholder buy-in. To secure that internal buy-in and integrate PETs effectively with existing infrastructure, all affected stakeholders should be involved in the solution design discussions from the earliest stage.

Author:

Bryan Tan

Senior Director, Global Privacy Program Management and Regional Privacy Officer, APAC
ADP


Author:

Erick Aviles

Privacy Lead for Korea, Japan, Australia and NZ
Viatris


Author:

Lawrence Wee

Deputy Director
Ministry of Health, Singapore
