E81 / February 26, 2026 / 37:49
Quantum LDPC error correction
with Larry Cohen and Paul Webster
Breaking Down RSA: How QLDPC Codes Cut Quantum Computing Requirements by an Order of Magnitude
Why this episode matters
- Why QLDPC codes outperform surface codes — How throwing out nearest-neighbor connectivity assumptions unlocks better physical-to-logical qubit ratios across multiple hardware platforms
- The algorithmic tricks that matter — How shared register reads and parallelization techniques can dramatically reduce runtime on slower quantum hardware platforms like trapped ions and neutral atoms
- What "hardware agnostic" really means — Why developing error correction methods that work across superconducting, trapped ion, photonic, and neutral atom platforms is crucial for the quantum ecosystem
- How generalized ladder surgery enables logical operations — The breakthrough that made QLDPC codes viable for full quantum computation, not just quantum memory storage
- Why decoding remains the bottleneck — The real-time classical computation challenges that still need solving to make fault-tolerant quantum computing practical
- The business model emerging around quantum architecture — How companies like Iceberg are positioning themselves as the "ARM or Nvidia" of quantum computing through specialized fault-tolerant designs
- What cryptographers should know now — Why the timeline for cryptographically relevant quantum computers may be compressing faster than expected, and why algorithmic improvements matter as much as hardware scaling
Resources & links
- Iceberg Quantum's Pinnacle paper — "Reducing the Overhead of Quantum Error Correction with QLDPC Codes"
- Craig Gidney's foundational Shor's algorithm optimization work
- Scott Aaronson's blog analysis of the research implications
qubitsok — Cut Noise. Work Quantum. The quantum computing job board and arXiv research digest built for the community.
- Job seekers & researchers: Subscribe free at qubitsok.com — weekly job alerts + daily paper digest filtered by 400+ quantum tags.
- Hiring managers: Post your quantum role and reach 500+ targeted subscribers. Use code NEWQUANTUMERA-50 for 50% off your first listing at qubitsok.com/post-job.
Key insights & quotes
- "We think this is an immensely, fundamentally valuable thing to do — when hardware improvements and reduced resource requirements converge, we'll be able to do something useful." — Larry, Iceberg Quantum CSO
- "It would probably be a big mistake to assume that the numbers are not going to keep going down" — on future resource requirement reductions for RSA breaking
- "At every level of scaling, new challenges emerge — it's not just a matter of taking a zero off your number" — Paul Webster on why order-of-magnitude improvements translate to real timeline changes
- "There's no obvious reason why something like the Pinnacle architecture wouldn't have an obvious impact once hardware companies reach hundreds of thousands of qubits" — on practical implementation timelines
- "This is why it's so important to have this broader perspective and not be too dependent on the assumptions of one hardware platform" — on the value of hardware-agnostic approaches
Speaker 1: Welcome to the first bonus episode of the New Quantum Era podcast. The show will be sticking to its regular weekly schedule, new episodes every Monday. But every now and then, something happens that's timely enough that I wanna get it to you right away, and this is one of those times. Iceberg Quantum from Australia made a big splash last week with the announcement of their Pinnacle architecture and the results they got with it. Namely, a dramatic reduction in the number of physical qubits required to crack RSA-2048 using Shor's algorithm. Now, this is a theoretical result. Nobody's about to break RSA today or anytime soon, but it's still really newsworthy. Originally, resource estimates for factoring RSA-2048 on a quantum computer were around 20,000,000 physical qubits. Then last year, Craig Gidney of Google published a paper that brought that number down to about a million, and now Iceberg is claiming fewer than 100,000. Gidney's paper made a huge splash with a 20-fold increase in efficiency, and what we're talking about here is an order of magnitude more efficient than that result. That is a massive leap. Of course, there's a reason this particular benchmark captures people's imaginations. Shor's algorithm is one of those things that first put quantum computing on the map. It makes quantum computing tangible. You don't have to explain what a Hamiltonian is or why simulating subatomic particles is gonna matter. Everyone understands encryption, and everyone understands what it would mean to break it. In fact, there's a great piece of history here. When Peter Shor first presented his factoring algorithm at a seminar, he was introduced by Len Adleman. Len Adleman, the A in RSA, one of the inventors of the very encryption scheme that Shor's algorithm threatens.
And for the movie buffs out there, Adleman had also served as a mathematics adviser on a film released just three years earlier, Sneakers, starring Robert Redford, about a team that gets hold of a device that can crack any code. So in a way, this is an example of life imitating art. Before we get into it, a quick word from today's sponsor. This episode is brought to you by qubitsok, a dedicated job board and research digest for quantum computing. If you're building a career in quantum or hiring for one, this is the signal in the noise. Every week, they send out curated job alerts filtered by over 400 specialized quantum tags: trapped ions, QEC, quantum ML, all of it. And they run a daily arXiv digest that surfaces the papers that actually matter to your specific research interests. This is actually how I found qubitsok. I'd been trying all the various ways to make sense of the enormous volume on quant-ph, and qubitsok really does that for me. I go there on a daily basis, and that's what makes them really powerful as a job listing site. It's free to subscribe. Head to qubitsok.com and sign up in, like, thirty seconds. And if you're hiring in quantum, use the code NEWQUANTUMERA-50 for 50% off your first job posting. That's qubitsok.com, code NEWQUANTUMERA-50. The link is in the notes. So, today's conversation. I'm joined by Larry Cohen and Paul Webster from Iceberg Quantum. Larry is the cofounder and chief science officer. Paul is their quantum architect. Both did their PhDs at the University of Sydney under Stephen Bartlett, one of the world's leading researchers in quantum error correction. They've spun out of that group to build what they call the ARM of quantum computing, a company that aims to design fault-tolerant architectures for any hardware platform.
You'll hear about the Pinnacle architecture, how quantum LDPC codes beat the standard surface code approach in efficiency, what the RSA result really means and what it doesn't, and where they think this is all headed. Let's get into it. Thank you very much for joining me on the podcast today. I saw the paper that you posted to the arXiv last week sometime and immediately knew I wanted to have you on, so I shot an email over to Paul, and you very graciously responded quickly and agreed to be on the podcast. It's very timely. It feels like I'm almost breaking news by having you guys on. The paper was about using QLDPC error correction, and an optimization of Shor's algorithm based, I think, on some prior optimization work, to get the resource estimation for breaking RSA 2048 down to about 100,000 physical qubits, which is an order of magnitude smaller than the prior resource estimation. So that's what caught my eye, and I think everybody else's. So welcome to the show. And do you want to start just by quickly introducing yourself? We'll go with Larry first. Speaker 2: Yeah. First of all, thanks for inviting us. It's great to be here. So I'm Larry. I'm one of the cofounders and also the chief science officer at Iceberg. And we're a company really just dedicated to developing fault-tolerant architectures for quantum computers, and in particular, developing fault-tolerant architectures with improved physical overheads, you know, when compared to what people kind of expected you'd be able to do a few years ago. That's really our mission: to bring down the number of qubits you need to use for quantum computing. Speaker 1: Excellent. And Paul? Speaker 3: Yeah. My name is Paul. I was just excited by the vision, I guess, that Larry and his cofounder, Simon Felix, have for Iceberg Quantum. So, yeah, they hired me about a year ago, and I'm now the architecture lead.
In that role, I oversee a lot of the work of putting their vision into practice, to try and actually realize the things that we hope we can do. Speaker 1: Yeah. Awesome. Okay. Great. So, I mean, there's a ton of stuff to unpack about this paper, and about Iceberg itself, and the Pinnacle, sort of, almost system architecture that you're working on, in a sense. I guess the starting point is, you know, QLDPC is relatively new in the topic of quantum error correction. I think we're more familiar with hearing surface code; that's what Google's Willow demonstration was implementing. What is QLDPC, and why did you select to work on that error correction method? Speaker 2: Yeah. So, I mean, maybe a bit of background. Obviously, quantum computers need error correction and fault tolerance to be able to operate, because, you know, the qubits and the hardware and the gates are all noisy, and so you need to do something. And so for a long time, people were very focused on something like the surface code. And the appeal of the surface code is that it has this nearest-neighbor connectivity. Right? So you can lay out your qubits on a chip, and they only need to talk to the other qubits that are right next to them. And, obviously, that is quite appealing, especially if you're building certain kinds of hardware, like superconducting qubits. So I guess QLDPC tries to change that up a bit by saying, well, assume you can build hardware where the qubits don't just need to talk to the qubits that are right next to each other. What can you do with that? And it turns out you can do quite a lot. In particular, even on a quite theoretical level, we know that there are these bounds to how densely you can pack information when everything needs to be local. Right?
So with the surface code, there's just this inherent kind of overhead: you need a certain number of physical qubits to be able to get a certain number of logical qubits out. With quantum LDPC codes, that's no longer the case. So now you can imagine having an error-correcting code block which can encode multiple logical qubits, and potentially a lot more than you can in a surface code. And obviously, that's very appealing, because now you can do useful quantum computing with far fewer physical qubits. Speaker 1: So just to interject for a second. So the physical constraint of the nearest-neighbor connectivity in superconductors, that's sort of the lattice, the almost naive approach to laying out qubits in a square grid. We've all seen the picture of the surface code being drawn out as a diagram that almost exactly mirrors the topology of the qubits on the physical chip. So what you're saying is that if you throw out that nearest-neighbor connectivity as an assumption, and you assume that you can connect to qubits that are physically further away in the system architecture, that opens the door to different mathematical approaches to error correction. Is that accurate? Speaker 2: Yeah. That's exactly right. And in particular now, you know, people have this picture in their mind exactly like that, of laying out qubits on a chip. But now we have all these different kinds of hardware platforms that are becoming quite prominent, like neutral atoms, like trapped ions, like spin qubits, like photonics. And they just don't have these same constraints. Right? You know, with trapped ions and neutral atoms, you can shuttle them around. With photonics, you can move things kinda wherever you want, you know, using optical cables.
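Larry's point about packing more logical qubits per block can be put in rough numbers. The sketch below uses the published [[144, 12, 12]] "gross" bivariate bicycle code (Bravyi et al., 2024) as a reference point; these are textbook comparison figures, not numbers from the Pinnacle paper:

```python
# Rough physical-per-logical overhead comparison. A distance-d surface code
# uses on the order of 2*d^2 physical qubits (data + measurement ancillas)
# for ONE logical qubit; a QLDPC block code encodes many logical qubits.
d = 12
surface_per_logical = 2 * d ** 2  # ~288 physical qubits per logical qubit

# The [[144, 12, 12]] "gross" code: 144 data qubits plus 144 check qubits
# encode 12 logical qubits, also at distance 12.
qldpc_per_logical = (144 + 144) / 12  # = 24.0

print(surface_per_logical / qldpc_per_logical)  # -> 12.0, i.e. ~12x fewer
```

The exact ratio depends on the codes compared, but an order-of-magnitude gap at fixed distance is the core of the appeal.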
And so, you know, there's this whole plethora of different hardware platforms now that just don't have these constraints. And, yeah, that's why it's such an exciting time to work on these kinds of fault-tolerant architectures that don't assume this anymore, so that you can really find out, you know, what can we actually do, how far can we go in terms of bringing down the numbers. Speaker 1: Right. So it's potentially more efficient in terms of, you know, the physical-qubit-to-logical-qubit ratio. And LDPC itself is a classical error correction approach that I think dates from the early sixties, if I'm not mistaken. When and where, like, who had the insight that it could be adapted, to add a Q in front of it? Which sounds like the stereotypical thing: take any, you know, technology, put a Q in it, and now it's quantum. Right? It seems to be working here. Speaker 2: I think people started thinking about this a while ago, actually. Maybe the first paper on this came out twenty years ago. It's a very natural thing. In some ways, it's more of a natural thing to think about in quantum coding than it is in classical coding. And the reason for that is, so what is LDPC? It stands for low-density parity check. Essentially, what that means is maybe qubits need to be able to talk to each other non-locally, but they don't need to be able to talk to a lot of the qubits. Each qubit only needs to be connected to a few other qubits. And, obviously, when you're trying to build quantum hardware, that is actually immensely useful. Even if the non-locality is a bit tricky, the fact that each qubit only needs to talk to a few others is, for a whole variety of reasons, very useful. And so it's actually a really natural thing to think about in quantum computing.
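The "low-density parity check" idea Larry unpacks here can be sketched in a few lines. The matrix below is a made-up toy example in the classical setting, not a code from the paper:

```python
import numpy as np

# Toy parity-check matrix: 4 checks on 8 bits. "Low density" means each
# row (check) touches only a few bits, and each bit appears in only a few
# checks, even though the connections need not be between neighbors.
H = np.array([
    [1, 1, 0, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 0, 0, 1],
])

def syndrome(H, error):
    """Return which parity checks an error pattern violates (mod 2)."""
    return (H @ error) % 2

# A single bit-flip on bit 1 trips exactly the checks that include bit 1.
e = np.zeros(8, dtype=int)
e[1] = 1
print(syndrome(H, e))  # -> [1 1 0 0]: checks 0 and 1 both touch bit 1

# Low-density property: every check here involves just 3 of the 8 bits.
print(H.sum(axis=1))  # -> [3 3 3 3]
```

In the quantum version the checks become stabilizer measurements, but the sparsity requirement, each check touching only a few qubits, is the same.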
So people have been thinking about it for a while, but mostly from a very theoretical standpoint, and less so from a practical how-do-we-actually-implement-this standpoint. Speaker 1: And when you say non-local connectivity, you mean that just in a physical distance way, not a quantum entanglement way. Is that right? Speaker 2: Yeah. Like, if you wanted to lay it out in 2D, you wouldn't be able to do it in a local way. Speaker 1: Right. Right. And in fact, I think that's why IBM's approach is a biplanar chip. Right? They've got two planes to the chip so they can actually have these longer-distance connections in addition to the nearest-neighbor qubits, the ones that are much closer. So my impression of QLDPC initially was that it had a lot of promise for sort of longer-term storage in quantum memory kinds of scenarios, but there were some hurdles for performing logical operations. Is that right? Speaker 2: Yeah. This is actually exactly the thing that I worked on during my PhD, and kind of why we started this company. Speaker 1: So I asked a smart question. Speaker 2: Yeah. Yeah. During my PhD, I worked exactly on this problem of how do you actually do logic in quantum LDPC codes, and not just how do you use them as a memory. And that, I think, really, for me at least, started this whole journey. And there's been a lot of great work since on that same problem, and a lot of great work on the hardware side as well. And so it's all kind of converged to start to make quantum LDPC codes a really viable solution, not just for memory in a quantum computer, but really for doing everything, essentially. Speaker 1: Right. Right. And so your approach, you're doing some form of magic state distillation or something. Is that correct? I mean, I was reading through it.
You were talking about the blocks almost being able to be a magic state engine, it sounded like. Is that right? Speaker 2: Paul, maybe you wanna take that. Speaker 3: Yeah. There are a couple of different things happening there. So the magic engines are sort of a different part of it from the main innovation that Larry was talking about, that he made in his work. So Larry's approach is really what's called generalized lattice surgery, or just generalized surgery, which is really a way to take a code and append some extra system to the code that allows you to do certain quantum measurements that have an effect equivalent to performing quantum gates, the kinds of unitary operations that you'd usually do. So in that sense, what it has in common with magic state injection and those kinds of methods is, yeah, that you're adding another system in to solve a problem. But it's quite different in that it's not dependent on this idea of preparing a state that you can inject into the system. Instead, it's really about just allowing yourself to do certain measurements that reproduce the effects you would get from directly doing the gates. Speaker 1: Oh, cool. Speaker 3: It's a generalization of the standard method used in surface codes, which is called lattice surgery, where the original term comes from. In that case, you're really talking about having different patches of surface code, and you merge and split them, and different ways of merging and splitting them allow you to do different measurements that are, again, equivalent to the gates. This is a generalization of that where, instead of just merging or splitting two different patches, you've really prepared a different kind of system that can merge and split in different ways with your code in order to do these different measurements.
Whereas the magic engine is a different aspect of that, which is really about adding in some extra ingredient that allows you to do the kinds of gates, what we call non-Clifford gates, that you can't do with the generalized surgery. And that, too, generalizes surface code methods, where they use a similar approach. Speaker 1: Yeah. I mean, that's kind of the challenge with either the QuEra approach or the IBM approach, is that the QLDPC code is protecting Clifford operations, but not Pauli operations. Is that right? And they have to perform that magic state distillation, or the lattice surgery, to get those Pauli operators. Is that... Speaker 3: So it's not quite that. The Pauli operators, in some sense, are the most basic level, so we're able to do them in either the surface code or the LDPC code. The next level up is what's called the Clifford operations. And those are the kinds of gates that, until Larry's work in the last few years, it wasn't very obvious how to do. Those, in a sense, were the roadblock; people weren't sure of the best way to do these Clifford operations in LDPC codes. The next level up is what you need to add to the Clifford operations to be able to do a universal quantum computation, so all of the gates that you would need to actually do a quantum algorithm, instead of something that could be done on a classical computer. Those are what we call, not very imaginatively, non-Clifford operations. Those are what we get by injecting the magic states. So the magic engine idea that we have in the paper, that's a relatively straightforward generalization of the approaches that people use in surface codes, where they talk about having magic state factories. You produce a magic state, and you inject the magic state in. And that state that you inject is like a resource. It's a tool where, once you have this state, it allows you to do non-Clifford gates when you couldn't before.
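Paul's Pauli / Clifford / non-Clifford hierarchy can be checked numerically with the standard definitions (nothing here is specific to Pinnacle): a gate is Clifford exactly when conjugating a Pauli by it returns a Pauli, up to a phase.

```python
import numpy as np

# Single-qubit Paulis and two test gates.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
S = np.diag([1, 1j])                      # phase gate: Clifford
T = np.diag([1, np.exp(1j * np.pi / 4)])  # T gate: non-Clifford

def is_clifford(U):
    """Clifford <=> U P U^dag is a Pauli (up to phase) for P in {X, Z}."""
    paulis = [I, X, Y, Z]
    for P in (X, Z):
        V = U @ P @ U.conj().T
        # Compare against every Pauli up to a global phase of 1, -1, i, -i.
        if not any(np.allclose(V, ph * Q) for Q in paulis for ph in (1, -1, 1j, -1j)):
            return False
    return True

print(is_clifford(S))  # -> True: S X S^dag = Y, S Z S^dag = Z
print(is_clifford(T))  # -> False: T X T^dag is not a Pauli
```

The T gate failing this test is exactly why the magic state machinery exists: it supplies the non-Clifford ingredient the code can't produce directly.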
The magic engine is a way to really make that process work in LDPC codes. So it's just a way to really lubricate that process, to make sure that we've got this constant input of non-Clifford gates in this more complicated setting, where we don't have lots of surface code patches that we're merging and splitting in the way that people have in the past. Speaker 1: Interesting. Interesting. And so, I mean, returning to the paper, there are obvious reasons why Shor's algorithm is used as a benchmark. It's well understood. It's the thing that kicked off, in many ways, the serious interest in quantum computing. It's easily translatable across other technical domains, even business domains; the idea of breaking encryption features in motion pictures, like Sneakers, for example. Everybody knows it. But, I mean, in a sense, you're still talking about 100,000 physical qubits. That's still quite a distance away in terms of engineering and shipping devices. How does this paper and this work relate to what Iceberg is trying to do as a startup, essentially? What's the business strategy you're actually pursuing? Speaker 2: Yeah. I mean, on a technical level, we started to develop these kinds of fault-tolerant architectures. And, obviously, the reason for that is because we think that by really having a company that's just focused on doing this kind of work, we can really push the boundary of what's possible here. Right? And so this is kind of a first example of that. You know, I guess for a while now, people in the field have thought, you know, we could use quantum LDPC codes to bring down the overheads, but no one had properly crunched the numbers in a proper, quite rigorous resource estimate.
And I think this is one of the first big attempts to do that. But it certainly won't be the last. And so, you know, this is not the end of the story; there'll be more improvements. And so, as a company, our position is that we think this is an immensely, fundamentally valuable thing to do. Right? Like, when you think about the things that go into building a quantum computer, obviously, hardware people will keep on improving the hardware. And, you know, people like us, and companies like Iceberg, will bring down the numbers that you need to do these useful algorithms, things like RSA, or material science simulations, quantum chemistry simulations. Speaker 1: Fermi-Hubbard in there as well. So yeah. Speaker 2: Yeah. Yeah. And the Fermi-Hubbard stuff. And so when these things kind of... Speaker 1: Fermi-Hubbard. That's right. Speaker 2: ...converge and meet, you know, we'll be able to do something useful. And so we work with various hardware companies to see these kinds of architectures implemented on their hardware. We work with trapped ions and photonics and spin qubits at the moment, you know, to see how architectures like Pinnacle, and other work we're doing, will impact their timelines, and eventually to see this realized on some actual quantum hardware. Speaker 1: Yeah. Yeah. I noticed that on your website, you list a variety of hardware partners. And I think in the paper you call your approach hardware agnostic. So in a way, this is advancing the theory into a workable model that can potentially be implemented on multiple hardware modalities, which is obviously really, really valuable.
Do you see yourselves as almost, you know, a consulting firm providing that expertise, or are you trying to productize this in some sort of, like, firmware components that could be licensed by hardware vendors? Speaker 2: Yeah. I think much more the latter than the former. You know, if you look at the classical space, some of the most valuable companies today are not the ones that actually fab the chips. Right? They're the ones that operate at this more design level. So companies like NVIDIA or ARM, you know, they don't fab things themselves. They just design the chip, and then other people actually make them. And I guess our belief is that it's gonna be quite similar in quantum computing: that one of the big quantum computing companies will be a company that operates at this layer of designing the architecture, and then works in conjunction with, you know, the people that build the hardware to actually build the quantum computer. Right? And in terms of, you know, the business model and how you make money, there are lots of particular ways that can go, and you see that even in the classical space. Some companies license out their designs, some sell products. But, you know, I think the core belief is that there is a lot of value just at this level of designing the architecture. And so that's really where we wanna operate. Speaker 1: Yeah. Safe to say, I mean, error correction is where you're starting, but to be an NVIDIA or an ARM in quantum, you'll have to have full system architectures, at least at a component level, that can be customized and used by, you know, the Apples or the Intels or the AMDs of the world. Right? Speaker 2: Yeah. And I mean, there's also a whole software ecosystem you can build around it. Right?
Like, even just on the error correction side, you know, there's decoding, and actually how you do that and how you implement that. And there's all the software to actually orchestrate the architecture and, at runtime, know what to do, the software to compile down the algorithm. I mean, there's a whole bunch of things that you can build around this core product. Right? So it's pretty limitless, I think, what we can do. Speaker 1: Yeah. And in the paper, you did call out that readout is sort of an open problem still. Right? I mean, it's very computationally challenging because of the nature of the architecture; the QLDPC approach has a lot of computational challenges for syndrome decoding. Is that right? Paul, is that something you're tackling? Speaker 3: Yeah. Definitely. So there's this problem of what we call decoding, which is really: you do some measurements, you find out what we call an error syndrome, which gives you information about what kind of errors might have happened in the system. And then we run this classical algorithm, called the decoding algorithm, that tells you, okay, given what we found out about the error syndrome, is there likely to be an error that's occurred? Is there a correction that we need to make? That process of reading out that information, doing the decoding, and then making sure that we're able to correct our system as we go, doing that in real time, I think, is a challenge across all kinds of codes. I think even for the surface code, it remains an open one to actually get it at the pace of something like the order of microseconds you would need to keep up with a superconducting processor in particular. But it's true that for LDPC codes, it's even more of an open problem.
A lot of work has been done in recent years on methods like what's called belief propagation, these different algorithms that you can use to try and solve this decoding problem for LDPC codes, but it really is an interesting area. So we at Iceberg are looking more into that as well, this problem of decoding, trying to find good approaches that solve it. But definitely, it's one of the really exciting areas, I think, in this space. Just like we talked about, in the past maybe the logical gates were the barrier that people worked on overcoming; I think we'll see some really exciting progress in the next few years on this decoding. Speaker 1: And I guess, I mean, an area of exploration is sort of the integration, the tight integration, of GPUs into these quantum devices, along the lines of Quantum Machines' OPX and DGX integration. Right? But, I mean, my instinct says that ultimately, there'd need to be very customized silicon, and maybe even a type of classical processor for decoding that we don't even have yet. Is that sort of a fair guess? Speaker 3: Yeah. It's really outside my expertise to talk about the classical hardware, but definitely that's my understanding too: that when you really get to these devices, we'll be dealing with really specialized hardware. So there are really multiple steps to the process, I think. There's the process of doing the theoretical work, developing these algorithms. And then, absolutely, there'll be an engineering challenge to really build out the whole software and hardware ecosystem that's gonna be supporting that real-time device. Speaker 2: Which is not to say that GPUs won't be very relevant. I mean, right now, I think we need to throw everything we can at the problem, and I definitely think that something like GPUs has a lot of advantages.
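The decoding loop Paul describes, syndrome in, correction out, can be sketched with a lookup-table decoder for the classical 3-bit repetition code. Real QLDPC decoders such as belief propagation are far more involved; this toy only shows the interface:

```python
import numpy as np

# Parity checks for the 3-bit repetition code: compare bits (0,1) and (1,2).
H = np.array([[1, 1, 0],
              [0, 1, 1]])

# Precompute a lookup table mapping each syndrome to its lowest-weight error.
# For large QLDPC codes this table is exponentially big, which is exactly why
# real-time algorithms like belief propagation are needed instead.
table = {}
for bits in range(2 ** 3):
    e = np.array([(bits >> i) & 1 for i in range(3)])
    s = tuple((H @ e) % 2)
    if s not in table or e.sum() < table[s].sum():
        table[s] = e

def decode(syndrome):
    """Return the most likely (lowest-weight) correction for a syndrome."""
    return table[tuple(syndrome)]

error = np.array([0, 1, 0])       # bit-flip on the middle bit
syndrome = (H @ error) % 2        # both checks fire: [1, 1]
correction = decode(syndrome)
print((error + correction) % 2)   # -> [0 0 0]: the error is cancelled
```

The real-time challenge is doing the `decode` step within the hardware's cycle time, microseconds for superconducting qubits, while new syndromes keep streaming in.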
Even on, like, the decoding side, because of how parallel they are, there are certain kinds of decoding approaches for which it would be quite natural to use something like a GPU. So I think it's still a bit up in the air, you know. Like, there'll be a lot of different parts that go into this. Speaker 1: Yeah. I mean, you mentioned, Paul, you know, the hardware implementation being out of your scope of expertise, but you guys are working closely with a number of different hardware vendors across different modalities. Do you have a sense from them of what sort of level of, you know, acceleration, or, you know, how does this unlock them? Does this compress their timelines in terms of getting to, you know, some kind of quantum advantage? What does that look like for them, do you think? Speaker 2: Yeah. I mean, I definitely think it would be wrong to say that, you know, this doesn't compress timelines. I think approaches like this definitely do that. And if you look at their road maps, I think certainly our hardware partners, and other hardware companies as well, all want to scale up to something like tens of thousands or hundreds of thousands of qubits by the end of the decade. Right? And there are still lots of challenges to get to that, but it definitely feels quite plausible that we will see some real progress this decade. And, you know, if a hardware company can reach this 100,000, hundreds-of-thousands kind of number, there's no obvious reason why something like the Pinnacle architecture wouldn't have an obvious impact there. Speaker 3: And I'd just add to that as well: I think it's important to emphasize that really at all levels of scaling, there are challenges to overcome with this hardware.
So you might imagine, oh, once you've got a thousand qubits, you just make a thousand times that and get a million? This is something John Martinis has written about really well, and he's working on it at his new company: there are different challenges at each level. For example, for this 100,000-to-a-million-qubit challenge, there's a whole bunch of questions for surface codes on superconducting qubits, such as how you manage needing multiple dilution refrigerators and networking them together. At every level new challenges emerge, and that's why these step changes, an order of magnitude here and an order of magnitude there, could really translate to real timeline changes. It's not just a matter of taking a zero off your number.

Speaker 1: Yeah. And beyond the QLDPC work, there's also algorithm optimization. You were leveraging work that Craig Gidney had done, but then you introduced some really clever algorithmic optimizations as well. There's the reading from a single register, because the reads are commutative, right? There was a trick with reading the data out of the register, a shared read, I think.

Speaker 3: Yeah, I think this is a really good example of why it's important to have this hardware-agnostic, broader approach. I wouldn't want to put words in his mouth, but my guess would be that Gidney, in his work, probably understood that this is in principle a trick you could use. But in his framework of superconducting qubits, which have a very fast cycle time, it wasn't really relevant to the problem he was solving, which I think is why he hadn't fleshed it out or presented it.
But when you move to a different hardware platform like neutral atoms or trapped ions, which might have a much slower cycle time and much longer run times, these kinds of approaches make a huge difference, because the problem becomes: how can you parallelize this if you've got a hardware platform where every operation takes a thousand times as long as it does on superconducting qubits? Now it's not just a matter of how many qubits you need. It's very much a matter of making sure this thing is going to run in a week and not fifty years. That really is the important thing there: it's a way to trade off the number of qubits against time, to recognize, okay, I need to bring this runtime below a target, a month, a week, something realistic, and I need some lever I can pull to make that trade-off in a way that's compatible with my hardware. So the focus was on thinking: this works fine in the superconducting regime, but when we look at a broader range of regimes, like we do in the paper, how do we solve these problems? We didn't want to present results saying you can do this with 100,000 qubits but it will take a hundred years, because, to be honest, I don't think that would be interesting to anyone who wants those details. So I don't see it as a huge step in the algorithmic direction; I think for experts in that area it's a natural step forward. I see it more as an example of why it's so important to have this broader perspective, to not be too dependent on the assumptions of one hardware platform.
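The qubits-versus-time trade-off described here is easy to see with back-of-the-envelope arithmetic. All the numbers below (the cycle count, the cycle times, the parallelization factor) are made-up illustrations of the "thousand times slower" point from the conversation, not figures from the paper.

```python
# Back-of-the-envelope runtime arithmetic (illustrative numbers, not from the paper).
# Total runtime = error-correction cycles x cycle time, divided by however much
# the algorithm can be parallelized across extra logical registers.

def runtime_days(cycles, cycle_time_s, parallel_factor=1):
    return cycles * cycle_time_s / parallel_factor / 86_400  # 86,400 s per day

CYCLES = 6e11                      # assumed cycle count for a large factoring run

sc = runtime_days(CYCLES, 1e-6)    # superconducting: ~1 microsecond per cycle
ion = runtime_days(CYCLES, 1e-3)   # trapped ions / neutral atoms: ~1000x slower

print(f"superconducting: {sc:.1f} days")          # → superconducting: 6.9 days
print(f"trapped ions:    {ion / 365:.0f} years")  # → trapped ions:    19 years
print(f"ions, 1000-way parallel: {runtime_days(CYCLES, 1e-3, 1000):.1f} days")
```

The last line shows the lever being discussed: spending more qubits on parallelism buys back the factor of a thousand in cycle time, turning a decades-long run back into something on the order of a week.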
And when you have so many experts at a company like Google doing such great work, it's easy for everyone else to just depend on their work without necessarily seeing the opportunities or challenges that might come up in different contexts.

Speaker 2: Yeah.

Speaker 1: Yeah. And I think there's a lesson there. We now have something like a hundred different quantum hardware companies, and that diversity is a resource we can leverage to come up with as many different creative approaches as we can. As you said, every level of this has serious problems and challenges. The science is not solved; it's not merely engineering. There are fundamental questions. So I feel like creative open-mindedness and broad horizons of thinking are really critical. I noticed also that you've got Scott Aaronson weighing in on his blog saying it's serious work and that he's impressed by it, with the same sort of caveats about there being a lot of engineering to be done. But you also acknowledged him in the paper for thoughtful feedback on the title. Can I ask what the original title was going to be?

Speaker 2: Yeah. Originally it was going to be, I think, "How to Break RSA with 100,000 Qubits." And I suppose the concern was that, out in public, that might look like we've already done it, which we certainly have not; no one needs to be too worried right now. So that was the change, to how we could reduce it: "Reducing the Overhead."

Speaker 3: We wanted to acknowledge Scott because we did appreciate it a lot. Obviously, he's had a whole lot of experience with how everything quantum can be misunderstood. It took him a moment to see, "I know what people are going to think of this."

Speaker 1: Yeah, he has experience with that.
He's been right out on the knife's edge of how the Internet reacts to research topics and papers. A good person to draw on, a good resource. So what's next for Iceberg and for the Pinnacle approach? What are the big milestones you're hoping to accomplish in the next twelve months?

Speaker 2: Yeah, there's a lot in the pipeline. In general, we're expanding. We just put out Pinnacle and at the same time announced our fundraise, so we're just growing. We're opening an office in Berlin, we're growing our business in the US, and we're bringing in great people, which is exciting, because great people do great work.

Speaker 1: Those new offices, is that more about tapping into specific talent you want to add to the team, or is it more about market presence with hardware vendors you want to partner with?

Speaker 2: More so the former. I mean, we've almost saturated all the good people in Sydney, so we just need to go global. There are a lot of great people in Europe and in the US, and they'll come on board. And then over the next year, I think we just want to keep improving the numbers. Pinnacle is not the end of the story. There's no obvious...

Speaker 1: You've got another order of magnitude in there?

Speaker 2: Look, from where we're sitting, there's no obvious floor to where the numbers can get to. And especially for such sensitive things as breaking RSA, it would probably be a big mistake to assume that the numbers are not going to keep going down.

Speaker 1: That's really cool. Well, thank you so much for your time. It's a fascinating topic, and I agree it's very important work.
And any time you can lower something by an order of magnitude, given all the challenges, it is notable, important, and appreciated. So thank you very much, guys.

Speaker 3: Thanks, Sebastian.

Speaker 2: Yeah, thanks for having us.

Speaker 1: Thanks again to Paul and Larry from Iceberg Quantum for such a great conversation. I'm really interested in what they're doing, and I find quantum error correction in general a really interesting topic. I hope you do too. This episode was again brought to you by qubitsok.com, the signal and the noise for your quantum job search. Learn more at qubitsok.com. If you want to go deeper on the topics we cover, with research highlights, commentary, and the occasional behind-the-scenes look at what's coming up for the show, sign up for the newsletter at newquantumera.com. It's free, and it's the best way to stay in the loop. And if you have an idea for a guest, a topic you'd like us to dig into, or you just want to say hello, reach out through the website; I read everything. If you're enjoying the show, please subscribe wherever you listen to podcasts and leave a rating or review; it genuinely helps new listeners find us. If you know someone who's quantum curious, send them an episode. Word of mouth is still how most people discover the podcast. Thanks for listening. I'm Sebastian Hassinger, and this has been The New Quantum Era. Theme music by OCH. See you next time.



