Two leaders in large-scale computing—Chris Davidson of Hewlett Packard Enterprise and Arjun Shankar of Oak Ridge National Laboratory—are spotlighted for their roles in building and directing the kind of artificial intelligence infrastructure that underpins data-heavy industries. Their backgrounds and responsibilities center on secure, scalable AI and high‑performance systems, capabilities that frame how algorithmic tools are developed and deployed across sectors where large, fast data streams and rigorous governance are essential.

About the speakers

Chris Davidson serves as Vice President of HPC & AI Customer Solutions at Hewlett Packard Enterprise. He oversees HPE’s global approach to AI Factory solutions and Sovereign AI, working with government agencies, commercial organizations, and research institutions to establish AI capabilities that scale to national and enterprise requirements while placing security and control at the forefront. In practice, his remit spans the design, delivery, and ongoing performance of complex computing environments that are expected to train and operate advanced models reliably.

His responsibilities include directing Product Management and Performance Engineering across HPE’s high‑performance computing and AI portfolio. That portfolio ranges from platforms for training large models to Cray exascale systems purpose‑built for massive throughput and parallelism. Davidson’s teams set product direction, design performance architectures, and define deployment patterns aimed at sustaining leadership in both high‑performance and AI computing, with an emphasis on predictable performance and operational efficiency.

Over nine years at HPE, Davidson has helped shape initiatives spanning Performance Engineering, AI Cloud, and Professional Services. Those efforts have focused on delivering optimized, cloud‑native, and globally deployed high‑performance environments: systems that are continuously tuned to workload demands while meeting the reliability expectations of mission‑critical users. Prior to joining HPE, he held technical and leadership roles in biotech and medical diagnostics. He earned an M.B.A. in Entrepreneurship and Finance and a B.S. in Biology from Loyola University Chicago.

Mallikarjun (Arjun) Shankar is the Division Director of the National Center for Computational Sciences at Oak Ridge National Laboratory. His research sits at the intersection of computer science and large‑scale scientific discovery programs that depend on scalable computing and data science, focusing on the bridges (methods, workflows, and systems) that allow vast datasets and complex models to drive real‑world insights. Shankar also holds a joint faculty appointment at the University of Tennessee’s Bredesen Center and is a senior member of both IEEE and ACM, reflecting longstanding engagement with the communities that define practices for advanced computing.

AI Integration

Davidson’s remit around AI Factory solutions and Sovereign AI centers on how organizations stand up end‑to‑end pipelines for training, tuning, and operating models while retaining oversight and control. That approach addresses practical questions about scaling infrastructure across sites, standardizing performance expectations, and aligning security with policy requirements. The same design questions recur in any environment that must prove the integrity and traceability of data and models.

Within a national lab setting, Shankar’s work on scalable computing and data science emphasizes the connective tissue between high‑end compute and the scientific outcomes it serves. The core task is orchestrating workflows—moving data efficiently, allocating compute intelligently, and enabling researchers to iterate quickly—so that large models and large datasets can be productively combined. The emphasis on interoperability, reproducibility, and scale is central to trustworthy AI operations.
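
Neither profile names specific orchestration tooling, but the pattern Shankar’s work points to can be made concrete. The sketch below is a deliberately minimal Python illustration built on the standard library’s graphlib; the stage names (stage_data, train_model, and so on) are hypothetical placeholders rather than anything ORNL actually runs. It shows the core guarantee any workflow orchestrator must provide: a task executes only after its dependencies have finished.

```python
# Minimal sketch of a scientific-workflow DAG. Each task maps to the set of
# tasks that must complete before it may run. All names are hypothetical.
from graphlib import TopologicalSorter

def stage_data():
    print("staging input data to fast scratch storage")

def train_model():
    print("training the model on allocated compute nodes")

def validate_results():
    print("validating outputs against held-out data")

def publish_artifacts():
    print("publishing the model and its provenance records")

workflow = {
    stage_data: set(),
    train_model: {stage_data},
    validate_results: {train_model},
    publish_artifacts: {validate_results},
}

# static_order() yields each task only after its dependencies, which is the
# ordering guarantee real orchestrators build scheduling and retries on top of.
for task in TopologicalSorter(workflow).static_order():
    task()
```

Production facilities add resource allocation, data movement, and failure handling on top of this ordering, but dependency-ordered execution is the foundation of the iteration speed described above.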

Technology Use Case

HPE’s portfolio, which includes large‑model training platforms alongside Cray exascale systems, reflects the compute patterns now common to state‑of‑the‑art AI: vast parallelism, accelerated hardware, and performance architectures designed for continuous throughput. Steering Product Management and Performance Engineering across this range requires aligning software stacks, networking, and storage so that end users can train bigger models and move data faster without sacrificing reliability.

At ORNL’s National Center for Computational Sciences, Shankar’s mandate underscores how research programs depend on scalable computing combined with modern data science techniques. The ability to coordinate massive experiments, track data provenance, and schedule resources across shared facilities ensures that complex models can be developed and validated at pace. This discipline in managing scale provides a template for any domain where data volume and model complexity converge.
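
Provenance tracking, mentioned above, reduces to a simple discipline: record exactly which inputs and parameters produced each artifact, in a form that can be audited later. The following is a hedged sketch of that idea, not ORNL’s actual system; the field names, file name, and parameters are invented for illustration.

```python
# Illustrative provenance record: content-hash the inputs of a run and attach
# the run parameters, so any output can be traced to exactly what produced it.
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(data: bytes) -> str:
    """Content hash identifying an input regardless of its file name."""
    return hashlib.sha256(data).hexdigest()

def provenance_record(inputs: dict[str, bytes], params: dict) -> dict:
    """Link a run's inputs and parameters into one auditable record."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": {name: fingerprint(blob) for name, blob in inputs.items()},
        "parameters": params,
    }

# Hypothetical run: one input file and two training parameters.
record = provenance_record(
    inputs={"simulation.dat": b"raw experiment bytes"},
    params={"learning_rate": 1e-4, "epochs": 10},
)
print(json.dumps(record, indent=2))
```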

Market Impact

The work described by both speakers highlights the operational realities of AI at scale. Davidson’s focus on globally deployed, cloud‑native, high‑performance systems speaks to environments where performance variability can translate to business risk and where governance over data flows and model behavior is non‑negotiable. Establishing clear performance baselines and repeatable deployment patterns is central to maintaining service quality as models and datasets grow.
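
The profiles stay at the level of principle, but a performance baseline in practice is just a recorded reference measurement plus a tolerance check applied to every new deployment. The sketch below makes that concrete; the metric names and numbers are assumptions chosen for illustration, not HPE benchmarks.

```python
# Sketch of a baseline regression check: flag any deployment whose measured
# throughput falls more than a tolerance below the recorded baseline.
# Metric names and values are illustrative assumptions, not HPE figures.
BASELINES = {
    "tokens_per_second": 12_000.0,
    "interconnect_bandwidth_gbps": 180.0,
}

def within_baseline(metric: str, measured: float, tolerance: float = 0.05) -> bool:
    """Return True if the measurement is within tolerance of the baseline."""
    return measured >= BASELINES[metric] * (1.0 - tolerance)

measurements = {
    "tokens_per_second": 11_100.0,          # below tolerance: a regression
    "interconnect_bandwidth_gbps": 178.5,   # within tolerance
}

for metric, measured in measurements.items():
    status = "ok" if within_baseline(metric, measured) else "REGRESSION"
    print(f"{metric}: {measured} ({status})")
```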

Shankar’s emphasis on bridging computer science with large scientific campaigns points to the management of complex, multiparty workflows. When numerous teams share infrastructure, predictable scheduling, clear interfaces, and robust data handling become as important as raw compute. That balance—between speed, control, and collaborative use—has become a defining constraint in AI operations more broadly.

Industry Response

Davidson’s collaboration with governments, enterprises, and research institutions illustrates how AI infrastructure has become a cross‑sector undertaking. Different stakeholders may prioritize sovereignty, compliance, or rapid experimentation, but they share a requirement for systems that can scale without eroding control. Product and performance strategies that cater to those shared needs help create common operating models for AI, regardless of the specific domain.

Shankar’s affiliations and leadership at a national laboratory reflect a similar need for common frameworks. His senior membership in professional communities such as IEEE and ACM supports the codification of standards and best practices that let large systems interoperate and evolve. Clear architectures, durable workflows, and transparent methods are the outcomes that allow organizations to sustain AI programs over the long term.

What to watch

Taken together, these speaker profiles outline complementary facets of the same challenge: creating AI systems that are both powerful and manageable. Davidson’s focus on product direction, performance architecture, and deployment discipline provides the commercial and operational scaffolding. Shankar’s concentration on scalable computing and data science provides the methodological and research‑driven path to turning compute into discovery.

The common thread is not a single technique or platform but the governance of complexity at scale. Whether the goal is national‑level capability or enterprise‑level reliability, the underlying requirements—security, performance predictability, and workflow clarity—remain constant. By detailing how they approach those requirements in their respective institutions, the speakers offer a view into the evolving playbook for high‑performance AI.

As organizations evaluate how to support ever larger models and increasingly data‑intensive workloads, the perspectives of Davidson and Shankar situate today’s infrastructure decisions within a broader, steadily maturing discipline. Their roles center on building the foundations—technical, operational, and collaborative—that make advanced computing practical and sustainable.