In his Intel Innovation opening keynote Tuesday, Intel CEO Pat Gelsinger reminded his audience of the adage that “software is the soul of the machine.” On Day 2 of Innovation, Intel Chief Technology Officer Greg Lavender takes the baton to showcase how Intel is working to unleash developer productivity and innovation.
Greg will talk through the many assets and resources available to developers as part of Intel’s software-first strategy. And he’ll invite some customers, partners and colleagues to preview what these all make possible.
Live Blog: Follow along below for real-time updates of this live event from the San Jose McEnery Convention Center.
8:15 a.m.: Hello and welcome! This is Jeremy Schultz, communications manager at Intel, and thank you for joining me once again for the live show at Intel Innovation. We’re getting started earlier this morning, so grab your Intel Geti computer-vision-improved coffee and get ready to talk fast code and open acceleration.
8:25 a.m.: Pat Gelsinger’s keynote yesterday leaned a bit more on the silicon side of Intel’s “software-defined, silicon-enhanced” world, so today Greg’s expected to rebalance the scale.
8:30 a.m.: Here we go! Things are starting up with a video that leads with a Greg quote: “Software is at the core of everything we do.” And the tagline: “Intel software; pioneer the impossible.” I like a little swagger! Intel software has been underestimated for too long. 🦾
8:32 a.m.: It’s CTO go-time, here’s Greg!
“Welcome to the second day of Intel Innovation,” Greg says, “an event by developers, for developers.”
Greg stops to reveal his t-shirt, which reads simply “open.intel” to “reflect our commitment to an ecosystem founded on openness, choice and trust.”
Greg’s been a developer himself for over 25 years, programming “every successive generation of Intel processors” since the 8085.
8:35 a.m.: His job? “My team and I are working hard to remove the barriers that impede your ability to maximize your productivity.”
Even if you didn’t realize it, “a survey conducted by Evans Data Corporation in 2021 revealed that 90% of developers are using software developed and/or optimized by Intel.” More than likely, you’re one of them. 🤗
8:36 a.m.: The agenda for today: dig into some Intel software assets and resources and “geek out through some examples and live demos.” 🤓 It’s all framed by “our foundational open ecosystem approach based on choice and trust.”
8:37 a.m.: Open ecosystems “have proven to promote growth and innovation,” Greg asserts.
The Linux kernel is one example among many, and Intel has been “the top corporate contributor to the Linux kernel for 10 of the past 20 years.” 📝
8:38 a.m.: “Openness is the nutrition that fosters collaborative engineering and co-innovation,” Greg says, leading to projects like the Infrastructure Programmer Development Kit or IPDK.
The first release of the IPDK was recently announced, and it’s now a part of the Open Programmable Infrastructure (OPI) Project, launched in June. The vision is “an open, vendor-agnostic framework of drivers and APIs for infrastructure offload and management.”
Similarly, the open source Storage Performance Development Kit, better known as SPDK, speeds up and enables the NVMe storage protocol, Greg explains. Today, SPDK has community contributions from over 60 companies.
8:40 a.m.: Intel is bringing this same open approach to AI and machine learning. TensorFlow 2.9 now includes the oneDNN library by default, giving each of its 10 million downloaders (so far) an up-to 3x performance gain.
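For readers who want to try the oneDNN path themselves: TensorFlow 2.9 turns the oneDNN optimizations on by default on supported CPUs, while on slightly earlier 2.x releases you could opt in with an environment variable. A minimal sketch (the variable must be set before TensorFlow is imported; the actual `import tensorflow` line is left commented so the snippet stands alone):

```python
import os

# TensorFlow reads this flag at import time. On TF 2.9+ the oneDNN
# optimizations are on by default on supported CPUs; on earlier 2.x
# releases this opts in explicitly.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

# import tensorflow as tf  # eligible ops then use oneDNN-backed kernels
print(os.environ["TF_ENABLE_ONEDNN_OPTS"])
```

Setting the flag to `"0"` instead disables the optimizations, which is handy for A/B benchmarking the claimed speedup on your own models.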
Intel is also working with Google as a founding member and contributor to the OpenXLA project, to build standard, open source, modular compiler technologies. Quite a compilation for AI.
8:42 a.m.: If you don’t know the history of “write once, run everywhere,” it was the slogan of the Java programming language way back in 1995. Greg says he started programming in and teaching Java the year prior, later spending 10 years at Sun Microsystems. It was there that James Gosling created Java.
And Gosling, whom Greg dubs an “industry illuminati,” now joins us live on a video call.
8:43 a.m.: James you’re on mute! “In my defense, I was left unsupervised,” reads the t-shirt in James’s portrait.
8:45 a.m.: Greg: You and Linus are my heroes when it comes to open. I want to ask: how did “write once, run everywhere” come about?
James: The idea is to write software for systems not yet invented. I wrote a solitaire game decades ago, and it still runs everywhere! On machines I couldn’t dream of.
8:48 a.m.: Greg: We’re building these GPUs across client and data center and it takes a lot of software. With AI frameworks, “it just runs.” But we don’t have an open programming model to write once, run everywhere for AI across all devices. Where should we go to enable open accelerated computing?
James: It doesn’t matter how many employees you have; there are always smarter people outside your company. With foundational software, you get more horsepower through collaboration across industries. When your customers are involved in the process, that’s when you really find out what they need. The whole process accelerates.
8:52 a.m.: Greg: The other key innovation was the Java spec. How does governance fit and shape open source?
James: There are many camps: Apache, kernel.org, several of them. But they all have a process, how you request features, report bugs, make contributions. And pull requests, which begin good technical debate.
8:55 a.m.: And we’re having a t-shirt throw! Geek cred achieved.
8:56 a.m.: Yesterday you added “systems foundry” to your tech dictionary, and today it’s “open accelerated computing.”
The goal of open accelerated computing, Greg explains, is to achieve that write once, run everywhere nirvana —“lifting developers out of proprietary approaches, like CUDA, and enabling them to write once with open tools like SYCL, and then compile to multiple different hardware accelerators.”
8:58 a.m.: To further open accelerated computing, “we thought, ‘why not acquire the company that has been working on this for about 20 years?’” That company is Codeplay.
Codeplay has demonstrated versions of oneAPI that support not only Intel but also AMD and Nvidia hardware, as well as SYCL on RISC-V chips.
To continue driving this kind of openness, “we are pleased to announce the creation of an open and independent forum to enable the collaborative direction of the future of the oneAPI specification,” Greg says. And Codeplay will now lead it.
9:00 a.m.: With oneAPI, Greg explains, “developers can choose the best architecture for the specific problem you are trying to solve without rewriting software for the next architecture and platform.”
In December, the 2023 versions of Intel’s oneAPI Toolkits start shipping, bringing support for 4th Gen Xeon (Sapphire Rapids) and Intel’s latest GPUs and FPGAs.☝
9:02 a.m.: Intel’s investments across the software stack allow developers “to create and realize value throughout the stack,” Greg says.
“Our goal is to make it easy for developers to get the best software technology through the open-source ecosystem or as Intel-delivered products.”
9:04 a.m.: This “software-value realization” happens in three parts: foundational software, like firmware and OS kernels, is “market enabling”; mid-layer languages, frameworks and tools are “market differentiating”; and the top layer is “market making.”
9:06 a.m.: Intel’s Project Amber trust-as-a-service for confidential computing — a market-making example, in this case — was introduced in May at Intel Vision. 🔐
Among the first companies to team up with Intel to pilot Project Amber for real solutions is Leidos Health Group, and Greg invites Liz Porter from Leidos on stage to tell us about it.
9:07 a.m.: Liz points out that more people in the U.S. and in many parts of the world have mobile devices than have access to health care. 🥺
QTC, a Leidos company, operates mobile medical clinics for rural and underserved areas to help bridge the gap. The challenge is to use real-time data to diagnose patients, across a multitude of devices, while always protecting that data.
9:09 a.m.: Intel and Leidos are piloting Project Amber for use in these mobile clinics. Project Amber “liberates Leidos from the need to build and maintain complex, expensive attestation systems,” Liz confirms. “It allows us to focus on core differentiation.”
Saving lives and protecting privacy — thank you Liz! 🚑
9:12 a.m.: Next up: AI, home of “the most demanding workloads,” Greg notes, requiring big leaps in performance and new algorithmic breakthroughs. These leaps depend on an open AI software ecosystem.
To give developers faster on-ramps to AI, Greg has our next piece of news: in a joint effort with Accenture, “three additional AI Reference kits for the healthcare industry will help clinicians review medical code classifications, identify imaging anomalies and facilitate claims reviews.” 🩺
They’re now on GitHub, joining kits released in July and more arriving in the coming months.
9:15 a.m.: To enable open hybrid cloud architectures and edge AI, Intel is also working with Red Hat.
Red Hat Chief Technology Officer Chris Wright joins us by video to explain that Red Hat’s OpenShift Data Science has “integrated with Intel’s AI portfolio, so developers can train and deploy their models using Intel’s AI Analytics Kit and OpenVINO tools.”
Wright also announced the launch of a joint Intel and Red Hat AI Developer Program, aimed to “help developers easily learn, test and deploy models using Red Hat OpenShift Data Science and Intel’s integrated AI and edge portfolio.”
9:18 a.m.: One industry where AI could have big impact is life sciences, Greg says, “to transform drug discovery, diagnosis and treatment.”
Brian Martin heads up AI in Research and Development Information Research at AbbVie and arrives to explain how AbbVie is looking to make that happen.
9:19 a.m.: Biopharma R&D has a productivity problem, Brian says. A single drug usually takes a decade and several billion dollars to develop. 💰
AbbVie’s straightforward-yet-bold goal: double R&D productivity. That’ll require more accurate forecasting, faster hypothesis testing and more automation.
9:22 a.m.: One method they’re trying is a knowledge graph, to gather and morph data into “a fabric of shared understanding.” But building and training these models is a huge effort, so AbbVie called friends for help. 📞
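For readers new to knowledge graphs: the usual building block is the subject-predicate-object triple, and queries walk chains of relations across them. A minimal sketch (the entities and relations here are invented for illustration, not AbbVie’s data):

```python
from collections import defaultdict

# Facts stored as (subject, predicate, object) triples, indexed for traversal.
triples = [
    ("compound_X", "inhibits", "protein_A"),
    ("protein_A", "implicated_in", "disease_Y"),
    ("compound_Z", "inhibits", "protein_A"),
]

index = defaultdict(list)
for s, p, o in triples:
    index[(s, p)].append(o)

def follow(subject, *predicates):
    """Walk a chain of predicates from a subject, fanning out at each hop."""
    frontier = [subject]
    for pred in predicates:
        frontier = [o for s in frontier for o in index[(s, pred)]]
    return frontier

# Which diseases might compound_X be relevant to, via its protein targets?
print(follow("compound_X", "inhibits", "implicated_in"))  # ['disease_Y']
```

Training GNNs over graphs like this — at billions of triples rather than three — is where the distributed partitioning and hardware acceleration Brian describes next come in.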
Katana Graph and Intel, working together on a 4th Gen Xeon-based setup, “were able to deliver 16x speedups on distributed graph partitioning, and 4.7x speedups on distributed GNN training using Intel AMX and BFloat16 precision,” Brian says.
“This shows us the power of combining next generation hardware and software together.”
Saving lives with software again! 🚨
9:23 a.m.: “Yesterday, Pat stole some of my thunder and mentioned Intel Developer Cloud,” Greg quips. But now Greg’s going to do more than talk about it and invites up Intel’s Harmen Van der Linde, product manager for the Intel Developer Cloud, to give us a tour.
9:26 a.m.: In recent months, Intel invited a small number of customers to try out the developer cloud and received “great feedback.”
Now it’s expanding and shifting focus to provide early access to new Intel platforms and software for pre-launch development and testing, Harmen says.
As Intel builds out the service, “we are working with sales and field teams to identify customers we can invite for the beta trial, with more to follow in the future.”
9:29 a.m.: All right, it’s flame graph time. This will make sense in a couple minutes.
Greg points out that developers “have an intense desire to understand the performance of the products we develop.” He calls up the person for the job: Brendan Gregg, Intel Fellow and “a world-class cloud performance expert to share with us his flame graphs and performance tools.”
9:30 a.m.: Brendan lays it out plain: Intel needs great tools to understand and (then) improve product performance, and customers need like tools to get the most out of Intel systems.
“It’s my dream to be able to observe everything, to understand the performance of all software and hardware,” Brendan says. Of many planned tools, he’s going to show three capabilities of an “open source profiler-as-a-service” that’s in development.
9:33 a.m.: The first is CPU flame graphs. In short, these graphs show all layers of running programs in order to visualize the parts of the stack that are holding up the CPU train, so to speak. 🚅
Second is off-CPU flame graphs, “which we are including in a public product for the first time.” These show, basically, everything else. Combined with the CPU graphs, “you have a way to easily and quickly analyze the performance of all software and all types of issues.” It’s like pro football’s “All-22” view — wide enough to see all players on the field — for your software stack. 🏟
9:34 a.m.: Third and last is CPI flame graphs, another public first, which “takes CPU flame graphs and shows processor cycle performance — the cycles-per-instruction metric.” It translates low-level performance into terms developers can understand: “their source code.”
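If you’ve never built a flame graph, the conventional input is sampled call stacks collapsed into “folded” lines: frames joined by semicolons, followed by a sample count. A tiny sketch of that aggregation step (the stacks are made up, and this is not the profiler-as-a-service shown on stage):

```python
from collections import Counter

# Raw stack samples, innermost frame last, as a profiler might capture them.
samples = [
    ("main", "parse", "read"),
    ("main", "parse", "read"),
    ("main", "compute"),
    ("main", "compute"),
    ("main", "compute"),
]

# Collapse into the folded format flame-graph renderers consume.
folded = Counter(";".join(stack) for stack in samples)
for stack, count in sorted(folded.items()):
    print(f"{stack} {count}")
# main;compute 3
# main;parse;read 2
```

The rendered flame graph then draws each folded line as a stack of boxes whose width is proportional to its count — which is why a glance at the widest towers tells you what’s “holding up the CPU train.”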
It’s all coming soon and easy to use “in an upcoming service,” Brendan says.
9:36 a.m.: To the future we turn, to quantum computing. 🔮
“There are several qubit technologies out there,” Greg says, “but there’s only one that’s built on transistor technology and that’s the one Intel is focusing on.” Intel knows how to make transistors — gobs and gobs of them, every second, around the world — and to reach “quantum practicality” and millions of qubits, you need high-volume precision.
9:38 a.m.: Greg’s holding an “ultra, ultra cool” qubit wafer — maybe the most rainbow wafer I’ve ever seen! — built at Intel in Oregon with “10,000 arrays on this wafer, each with three to 12 qubits.”
That number is small, “but the process is stable, has a very high yield, and allows us to start building large qubit arrays.”
9:39 a.m.: There’s an app for it, too: “Intel is announcing the availability of our beta release of the Intel Quantum SDK for quantum simulation,” Greg says. “The kit allows developers to program quantum algorithms using simulated qubits.”
And he’s showing how it works. It starts off compiling, fires up needed resources, then starts running the algorithm — in this case entangling 25 qubits — and finally out come the results.
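Entangling 25 simulated qubits is beyond a blog post, but the core of what any statevector simulator does can be sketched at two qubits: apply a Hadamard, then a CNOT, and the state becomes the Bell pair (|00⟩ + |11⟩)/√2. This is a generic illustration, not the Intel Quantum SDK’s API:

```python
import math

# Amplitudes for the basis states |00>, |01>, |10>, |11>; start in |00>.
# Qubit 0 is the high-order bit in this ordering.
state = [1.0, 0.0, 0.0, 0.0]

def hadamard_q0(s):
    """Hadamard on qubit 0: mixes the pairs (|00>,|10>) and (|01>,|11>)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot_q0_q1(s):
    """CNOT with qubit 0 as control: flips qubit 1, i.e. swaps |10> and |11>."""
    return [s[0], s[1], s[3], s[2]]

state = cnot_q0_q1(hadamard_q0(state))
print([round(a, 3) for a in state])  # [0.707, 0.0, 0.0, 0.707]
```

Memory is why simulation gets hard fast: a full statevector needs 2^n amplitudes, so 25 qubits already means tens of millions of them — a good fit for the NUC demo Greg describes next.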
9:41 a.m.: Greg downloaded the SDK to his Dragon Canyon NUC so he can “experiment with quantum algorithms using 25 qubits.”
“I want to simulate a wormhole and see if I can transport my dog to another planet — and back.” Laika would be proud. 🛸
9:42 a.m.: But don’t forget about Y2Q. Progress toward post-quantum cryptography continues, including recent steps toward standardization that raise the urgency of both the opportunities and the risks.
These are major steps forward for our industry as it prepares to be Y2Q-ready or quantum-resistant by 2030, Greg says.
9:43 a.m.: “Today I shared a lot of technical information on how Intel is engaging with the ecosystem through a software-first, developer-first approach,” Greg says to close things out. Greg hands over the stage to AI luminary Andrew Ng and that concludes our liveblog.
Another Intel Innovation is in the books! For more details about everything Greg announced today, in addition to news and resources for everything from Day 1, don’t miss the Intel Innovation 2022 press kit.