The following is a rough transcript which has not been revised by The Jim Rutt Show or Sergey Kuprienko. Please check with us before using any quotations from this transcript. Thank you.
Jim: Today's guest is Sergey Kuprienko. He's the CEO and co-founder, along with Alex Fink, a recent guest on our show, of Swarmer, a company that builds software for drones, so far mostly for warfare, as you can well imagine, in Ukraine. He's the former head of research at Ring Ukraine, which was acquired by Amazon, he was head of R&D at FF Group, and he's held other positions in tech and business development. Welcome, Sergey.
Sergey: Hello, and thanks for hosting me today, Jim. I appreciate it a lot, and I'm really glad to talk to you on this perfect morning of Independence Day in the United States.
Jim: Yes, indeed. It's July 4th, Independence Day in the US, which makes this a very appropriate conversation. That was when the Americans put out their call for independence, and the Ukrainian people are fighting for their independence against Putin and his orcs. Now, some Americans of course don't necessarily support Ukraine, but I'm going to put my flag in the ground and say that from the beginning I have realized this is an extraordinarily important thing, that the West has to pull together and support our amazing friends in Ukraine. I had four experts on early in the war, and they all predicted… Actually, one did not. Three out of four predicted that Ukraine would get smashed within a few weeks. One of them was wiser, and he had some experience in the area.
And he said, “Not so fast. Not so fast. Ukrainians are tough people,” and so they have been, and I think it’s a morally correct decision to stand up to aggression, but frankly, from just a practical perspective, we’re wearing the hell out of the Russian army, and the brave Ukrainians are doing all the fighting and the bleeding for 5% of our defense budget. So idiots that think we should not support Ukraine, you’re fucking idiots, right? So tell your congress critters to support Ukraine. Okay, end of editorial by ranting Jim, but I thought for the 4th of July, what the hell? My blood is up, right?
If our patriots are willing to die for their freedom, we can certainly write some checks to help other people die for their freedom. And so anyway, with that, we’re going to talk to Sergey about his company, and then a little bit more broadly about drone warfare in the Ukraine conflict. And regular listeners know I’m a bit of a military history student. And one of the parallels I’ve seen is between the amazing advance in aviation during World War I, and the amazing advance of military uses of drones in the Ukraine-Russia conflict. At the start of World War I, the planes were only a little more sophisticated than Wright brothers planes, and they were only used for surveillance initially, to try to fly over the other guy’s lines, and take some pictures, see what was going on.
And then of course they said, “Well, we don’t want those guys flying over their lines.” So they invented the fighter plane. The fighter plane did not even exist as a concept until World War I, a very creaky plane with a couple of guys in it with rifles. And then they started dropping mortar bombs out of the planes by hand, reaching over the side of the open cockpit plane, drop a mortar bomb on the troops below. That was the invention of bombing. And then the fighters got better, and then the counter fighters. And then you had this arms race between the French and the Germans, basically on the Spads versus the Fokkers, and the Sopwith Camels.
And by the end of the war, the Germans were even doing long range bombing from Zeppelins. It was crazy. So aviation took us amazingly forward, and something quite similar is happening in Ukraine, where there were some military drones early in the war, and there was a whole bunch of repurposed civilian drones. And that’s where a lot of the really interesting action has been. So Sergey, why don’t you tell us from your perspective, give us your sense of the evolution of the usage of drones in the Ukraine-Russian conflict. And of course, Sergey is doing very sensitive work, so there’s some things he may not be able to tell us. But within what is reasonable to disclose, give us your view of the history of drone warfare in the Ukraine conflict.
Sergey: I think that's a very appropriate view, and the connection you just made between the evolution of planes and the evolution of drones, because it's literally the same. It all started over the last couple of years, after the full-scale invasion. And it actually happened thanks to the fact that for the last two decades a lot of DIY guys and a lot of enthusiasts developed a huge community around copters, around the autopilot ecosystem, around FPV drones, just to have fun and to get the most out of the smallest… Basically, these are not really UAVs. It's aviation, because you still have a pilot connected to each drone. The pilot is still there, just not sitting on the drone itself.
And the evolution itself is pretty similar. We started with repurposed agriculture drones. We started with small FPV drones. It took the world something like seven years to develop the plane from the small fighter to the B-52, and it's the same here: we have the evolution from small, literally seven-inch FPV drones to big two-meter drones, copters and planes, and five-meter planes, empowering them with different weaponry, with different arms, with different intelligence systems. And this is exactly the moment, right now, when as an industry we're thinking about putting anti-drone systems on the drones themselves. So that's counter-drone, or counter-UAV, et cetera, et cetera. And here we are right now.
Jim: Got you. As I discussed with Samo Burja, the well-known military strategist, last week, there are some forcing functions which, like any arms race, start to shape the evolution. As we talked about on that episode, a key one comes out of the electronic countermeasures the Russians have been developing. They may be dumb in some ways, but there are lots of really smart Russians. Institutionally, they act dumb because of their command and control system, but even that beast figures things out eventually. And they have figured out how to use electronic countermeasures to jam the radio link between the operator and the drone, not flawlessly. And I imagine they're using things like spread spectrum to defeat that. Don't give away any secrets.
And then of course the manpower situation. Russia is four times the size of Ukraine, and at some point the number of pilots that can be trained up to operate all these drones becomes a limiting factor. Both of those are arguments for more automation. We talked about jamming of the over-the-air connection for the pilot, but the other thing the Russians have figured out how to do is jam GPS, at least sometimes. And it's very, very nice to be able to use GPS to vector a suicide drone onto an oil tank or some other stationary target. That's not going to work very reliably anymore, and before long it won't work at all, because it's actually pretty easy to jam GPS, as it turns out, with enough money. So again, those are forcing functions that now produce a strong incentive on the Ukrainians to figure out how to get around those bottlenecks.
Sergey: Absolutely correct. And the Russians jam GPS to such an extent that it has started to harm commercial and civil aviation; there are many regions in Europe where jamming attributed to the Russians has been reported by officials over the last several months. And speaking about the front line, to some extent it would be pretty honest to say that there is no GPS at all for many sectors of the front line. It can be mitigated in two ways. One is using human pilots and human navigators: you just add more people to the same drone, so you have the pilot who flies and a second person, the navigator, who helps you figure out where the hell you are right now.
And the second way is to add automation: more compute power and non-GPS navigation systems on the drone, which has been done by many different teams and many different companies all around the world right now, to mitigate this.
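To make the "non-GPS navigation" idea concrete, here is a toy Python sketch of its simplest form, dead reckoning, integrating heading and speed over time when the GPS signal is jammed. This is only an illustration of the general technique, not any particular team's system; real drones would fuse this with inertial sensors, visual odometry, or terrain matching because pure dead-reckoning error grows over time.

```python
import math

def dead_reckon(x0: float, y0: float, legs: list[tuple[float, float, float]]) -> tuple[float, float]:
    """Estimate position with no GPS by integrating flight legs.

    Each leg is (heading in degrees clockwise from north, speed in m/s, duration in s).
    This is the crudest possible non-GPS navigation: error accumulates with every leg,
    which is why real systems combine it with other position sources.
    """
    x, y = x0, y0
    for heading_deg, speed_mps, duration_s in legs:
        heading = math.radians(heading_deg)
        x += speed_mps * duration_s * math.sin(heading)  # east component
        y += speed_mps * duration_s * math.cos(heading)  # north component
    return x, y

# Fly north-east at 20 m/s for 60 s, then due east at 15 m/s for 30 s.
print(dead_reckon(0.0, 0.0, [(45.0, 20.0, 60.0), (90.0, 15.0, 30.0)]))
```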
Jim: And then another forcing function that shapes the evolutionary context is counter-drone warfare. There's a very interesting article by Paul Maxwell of the Modern War Institute at West Point. For the half of my audience that is non-American, West Point is our military academy for the army; that's where the officers get their training. He wrote this very interesting article called Don't Bring a Patriot to a Drone Fight. He points out that even a fairly expensive drone is $15,000, while a Patriot is $4 million, right? So for the Ukrainians to shoot a Patriot at a $15,000 Iranian moped in the air is a very bad economic exchange.
So another forcing function as I see it, is that Ukrainians are going to need to develop drone to drone combat capacity. And then of course, as in World War I, first they’ll take out the Russian attack drones, but then they’re going to have to defend against the Russian fighter drones trying to take out their own attack drones. Do you see that as a significant forcing function at the current state of play?
Sergey: On the one hand, there is a threat when you have cheap-as-dirt, which means scalable, weaponry to assault you on the ground or in the air: you have drones. The cheapest of them are $300 to $400, and for that amount of money you can even destroy a battery, something pretty close to that in value. On the other hand, from several reports about counter-UAV measures, there will be another jump, another turn of the wheel, toward lasers or other high-energy weapons to counter drones, where a shot costs you up to 10 cents. So it will cost you 10 cents to hit one drone; you don't even have to engage the Patriot, or Hawk, or IRIS-T, et cetera, et cetera. That's the point. And then there will be another turn of the evolution: covering drones with some coating, or with some different [inaudible 00:10:04], or different tactics. So it's a rat race.
Jim: Yep. I know the Israelis are working on some laser things, and the US Navy as well. Do the Ukrainians have any energy weapons ready to deploy?
Sergey: I'd be happy to confirm that we have 25,000 such systems, but the answer is yes and no. We have several teams working in this direction, but again, that's a huge set of different technologies and different innovations that have to be developed, integrated, and connected together. So I really hope that we can do it sooner rather than later.
Jim: Yep. That sounds good. Okay. Let’s move on now to, what does Swarmer do? What is your niche today?
Sergey: Swarmer does swarming. It's not the story where you have 25 drones flying in the same direction, or approaching you, because that's just 25 separate drones. We do autonomy for coordinated robots. That means instant reaction and instant adaptation to changes: changes of situation, changes of environment, and changes to tasks or goals. It means if you have five drones approaching your position, you'd better shoot first; otherwise, the group immediately distributes itself around you and hits you from different sides. This is what Swarmer means, and this is the simplest, down-to-the-bone explanation of why it's so powerful and so scary at the same time, and why swarming is part of the military doctrine of every single modern army: the United Kingdom, France, Germany, the United States, and China. Because it enables coordination and reaction at a scale that is impossible for human operators. You can scale it up infinitely: five drones, 100 drones, 500 drones. It's all done by computers.
Jim: I read some material about Swarmer, found a little bit out there, not a lot. It sounds like at the current state of the art, you guys are good for 10, 15, something like that. Help us understand how the human and the technology operate together. Clearly, there’s no real leverage in the pilot just steering 10 different drones really quickly. There’s got to be some meta software in between that takes high-level direction from the humans, and then coordinates the drones directly. Is that approximately correct?
Sergey: Absolutely. And this is the key difference, and the key disruption we bring to the front line right now, because we encourage pilots to become operators. You don't work with every single drone anymore; you just work with goals, targets, landing zones, priorities, et cetera, et cetera. At the end of the day, you task the swarm as a group with some work: okay, you need to bring this payload to this location, or you need to gather intelligence on this area. And at the end of the day, as an operator you don't even know which drone will execute that task. That's the key difference.
It takes a lot of things being integrated to work together. And there are two important points we have kept in mind from day zero. We keep humans in the loop for engagement. The system can cancel an engagement, and if it decides a target is unreachable it can switch drones or select another way to execute the task, but the engagement itself must always be confirmed by a human. This is one of the key points of how we cope with AI in such a dangerous setting.
Jim: Does that include drone-to-drone combat? It seems that would be a huge negative if you had to have the human in the loop for a drone-to-drone dogfight, which could happen very fast.
Sergey: Yeah. And even, imagine we have a swarm versus swarm.
Jim: Exactly.
Sergey: Like 100 drones versus 100 drones.
Jim: That’ll happen within one year. I guarantee it, right? So let’s talk about that. Are there going to be different rules of engagement for drone to drone combat?
Sergey: Yes, and this is the answer we're chasing right now. To some extent the human in the loop works, but AI can do it way faster, and that means way more efficiently. And at the end of the day, if the decision-making is done by artificial intelligence, and it's done in the proper way, it's faster, clearer, makes far fewer mistakes, and can be explained. Because with humans there is always room for things like: oh, the guys from that part of the forest just hit us with mortars, all hands in, destroy that forest at any cost, just because I'm so angry at them. For AI, that's a totally different story. An AI that is built in the proper way can unwind every decision back to every single piece of reasoning behind it.
But getting back to your question, there is the kill box idea, the kill box approach that is used in NATO standards and on the battlefield. You set a region; for a dogfight, that would be a 3D volume of airspace. You set this region, and you as a human are responsible for saying: yes, everything inside this region is an enemy, is a threat, and it can be destroyed. And you, as the human making this decision, are responsible for any kind of collateral damage. After that, inside the kill box, the system can operate on its own and be as efficient as it can be.
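To make the kill box idea concrete, here is a minimal Python sketch: a human-approved, axis-aligned 3D volume, with autonomous engagement permitted only against targets inside it. The class, field names, and coordinate convention are illustrative assumptions for the sake of the example, not Swarmer's actual code.

```python
from dataclasses import dataclass

@dataclass
class KillBox:
    """A human-approved, axis-aligned 3D volume of airspace, in local meters."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float

    def contains(self, x: float, y: float, z: float) -> bool:
        return (self.x_min <= x <= self.x_max
                and self.y_min <= y <= self.y_max
                and self.z_min <= z <= self.z_max)

def engagement_allowed(box: KillBox, target_xyz: tuple[float, float, float]) -> bool:
    # Autonomous engagement is only permitted inside the volume a human has
    # approved; anything outside must be escalated back to an operator.
    return box.contains(*target_xyz)

# A 2 km by 2 km box up to 500 m altitude.
box = KillBox(0, 2000, 0, 2000, 0, 500)
print(engagement_allowed(box, (850.0, 1200.0, 120.0)))  # True: inside the approved volume
print(engagement_allowed(box, (2500.0, 100.0, 50.0)))   # False: outside, needs a human decision
```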
Jim: I like that. That’s an interesting middle ground. And then you pointed out that many of the atrocities in modern war, at least between advanced countries, have been as a reaction to an ambush or something, where people’s emotions are out of control. Like the famous My Lai massacre in Vietnam. That company had been ambushed by civilians a day or two before. And so they were all keyed up, and they were mad, and they slaughtered 150 civilians in a very horrendous fashion, but AIs don’t get mad. That’s an interesting thing.
They do not have emotions, at least as long as we're careful not to put emotions into them, and they can be coldly analytical. They don't remember that these assholes were shooting at them yesterday, and if they do remember, they don't care. They know what the rules are, and they apply the rules. I take the point that if the software is well-designed and the sensors are well-designed, it's quite possible we'll see fewer war crimes with automation, as long as you have clever ideas like defining the kill box and saying, "Mr. Drone, you can't shoot anything outside the kill box."
Sergey: Yes. And at the end of the day there is always a human who is responsible for [inaudible 00:16:27] this kill box, so we still keep track of it. But that's a good point. We even had a discussion during the first fundraising campaign that, to some extent, with AI the efficiency of operations will increase dramatically. It means that, to some extent, we as an AI company aiming to improve the efficiency of the [inaudible 00:16:52] are making warfare greener, with a smaller carbon footprint. How about that?
Jim: Yeah. That sounds like as good an excuse as any, right? In truth, I don't give a shit. I think in war you do what you've got to do. Think about the Allies in World War II. Americans killed two and a half million Japanese civilians by burning their cities to the ground even before we used our nukes on them. So war is ugly.
Sergey: Yeah. War is ugly.
Jim: And yes, we should try hard to minimize civilian casualties, but at the end of the day, winning is the first objective. So I hope you guys keep that in mind. I’m sure you do.
Sergey: Absolutely. That's P zero. And you know, there are many differences between defending democracy and playing for the autocracy side, because there are a lot of things we care about that the Russians have no clue about. The greener battlefield, the smaller carbon footprint, that's the fun fact and the fun story. But on the other hand, I live near Kyiv, and there have been many occasions over the last two years when some Russian missile was shot down near our house. When that happens, a lot of toxic stuff just falls on your head, and that's a pretty unpleasant experience. And especially since I have three kids, every single time you just get angrier, and angrier, and angrier, and you come back to the team and say, "We have to move faster, guys. We have to move faster. We have to stop it immediately."
Jim: Exactly, yeah. And to my mind, it’s still almost unbelievable that here it is, 2024, and a modern country like Russia is engaging in this kind of barbarism. I’m like, “What the hell?” It is what it is, right?
Sergey: It is what it is. Yeah. There's a huge difference in perception to some extent, probably to some [inaudible 00:18:51] extent. When I was at a London drone conference probably a month ago, I heard a lot about how Western authorities see Russia. Like: Putin cannot win the war, because he doesn't have the military capability to achieve that; on the other hand, he cannot lose the war, because why? Because he would have to explain to his people why he failed, why he spent such an enormous amount of money on this. Guys, you just don't have a clue. He doesn't plan to explain anything; he just doesn't give a shit about his people. The culture is so different. So-
Jim: If you look at the history of Russia, the leaders that lose wars often get their throats cut, so that's certainly got to be on his mind too. It's a messed up situation, and it's not obvious how we get out of it, but you guys will keep doing what you do. Now, a question about the details. Well, first a couple of comments about Swarmer. The way you described it, it's essentially moving the operator from being an infantryman to being a platoon leader, it sounds like to me, right? Because an infantryman is actually marching, and shooting, and aiming, and pulling the trigger. A platoon leader basically tells his sergeants, "Go take that hill," and then they organize the individual soldiers to go take that hill. Is that a fair analogy?
Sergey: Pretty fair. And additionally, the system can even give you feedback: okay, we don't want to take that hill right now, let's do it in two hours, because we know from the last two weeks on this particular part of the front line that there is a jamming system that will be switched off by the Russians for some maintenance. So let's wait two hours. Because that's AI: it analyzes data every single second. This is a funny situation we faced with the very first combat applications. When we put the system into the hands of the pilots, we were confronted by them: okay, so now the drone flies by itself, it means you don't need us anymore, so our commanders will put us in the trenches. No, guys, it's a different story. We empower you to use more drones at the same time, to command 5, 10, 50 drones, and to do it way more efficiently than before. Okay, okay, but anyway, we're still afraid they will put us in the trenches.
Jim: Understandable. Flying a drone from a storage container’s a hell of a lot safer and more enjoyable than being out in the mud-filled trench, being shot at by mortars and heavy artillery. That’s for sure.
Sergey: Absolutely. And we see the same experience from pilots. When they are deployed pretty close to the front line, they're highly efficient, and the most efficient pilots are hunted by the Russians. The Russians don't even care about hitting them with the most expensive missiles and rockets they have, because these pilots are so painful for them that they will pay anything to get rid of them. So our intention here is to move humans off the battlefield, and to help pilots work with the system even from Kyiv, from the Carpathians, from the other side of the world.
Jim: That's actually an interesting thought. Presumably, as in almost every human endeavor, there's a statistical distribution of ability. Are there drone pilots that are the equivalent of fighter pilot aces, that are just so much better than anybody else that it's almost like they're not the same species?
Sergey: Yes. To some extent this is absolutely true, because from our experience it takes 10 minutes to onboard a newbie pilot onto our system, and it takes an unbelievable three hours to onboard a professional pilot, because he or she has so much experience: what if this? What if that? What if jamming? What if it fails? And so on, and so on. And on the other hand, their self-confidence is sky-high: okay, we're aces, we don't want to work with this, we don't even want to touch it. So the highly professional, skilled pilots often just opt out of it.
Jim: Yeah, it's like the military. People who train American soldiers have often told me that someone who does not know how to shoot a rifle will actually learn to shoot a military rifle faster than an experienced hunter, who thinks they know everything and doesn't want to listen to the drill sergeant, right?
Sergey: Correct. Correct.
Jim: Let’s go back to the army platoon analogy. And now in an army platoon, there’s different roles that the different people have. There’s riflemen, but there’s also machine gunners, there’s grenadiers that have grenade launchers, guys that have anti-tank missiles, there’s radio men. There’s a whole series of what they call Military Occupational Specialties, MOS in the American Army. In your concept of the swarm, are all the drones the same? Or are there mission differentiated drone types within your swarms?
Sergey: There are two layers to this. The first one: every single drone is a dedicated robot with its own capabilities, its current status, and estimates of its success rate for different tasks. So you can task a bomber drone with an intelligence task, and in some cases it will be pretty efficient. The second layer is the current goals, priorities, and missions for the operation at hand. For instance, you have a heavy bombing drone that is quite slow. On the other hand, as soon as it has dropped all its payload, and you need critical intelligence on a nearby area, it has the capability, it has a better [inaudible 00:24:45], and it can be switched to an intelligence, observer role. And in AI, in contrast to people, there is one very significant improvement over humans, over us: it learns instantly. When you get a new version of the software, some new role or some new ability, you scale it in the blink of an eye, whereas with human pilots you have to spend two months to train them properly.
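Here is a minimal Python sketch of the capability-and-status scoring idea described above, in which a bomber that has expended its payload can be re-roled as an observer. The Drone fields, the scoring weights, and the assign helper are all invented for illustration; a real planner would weigh many more factors.

```python
from dataclasses import dataclass

@dataclass
class Drone:
    drone_id: str
    roles: set[str]          # e.g. {"strike", "recon"}
    payload_remaining: int   # munitions left
    battery_pct: float       # 0..1
    camera_quality: float    # 0..1, crude stand-in for sensor fitness

def task_score(drone: Drone, task: str) -> float:
    """Rough fitness estimate of a drone for a task type (weights are illustrative)."""
    if task not in drone.roles:
        return 0.0
    if task == "strike":
        if drone.payload_remaining == 0:
            return 0.0
        return 0.6 * drone.battery_pct + 0.4 * min(drone.payload_remaining, 2) / 2
    if task == "recon":
        # A bomber that has dropped its payload can still fly reconnaissance
        # if its battery and camera allow it: the re-roling described above.
        return 0.5 * drone.battery_pct + 0.5 * drone.camera_quality
    return 0.0

def assign(drones: list[Drone], task: str) -> Drone | None:
    best = max(drones, key=lambda d: task_score(d, task))
    return best if task_score(best, task) > 0 else None

fleet = [
    Drone("bomber-1", {"strike", "recon"}, payload_remaining=0, battery_pct=0.7, camera_quality=0.8),
    Drone("fpv-2", {"strike"}, payload_remaining=1, battery_pct=0.4, camera_quality=0.3),
]
print(assign(fleet, "recon").drone_id)   # bomber-1, re-roled as an observer
print(assign(fleet, "strike").drone_id)  # fpv-2, the only one with payload left
```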
Jim: Yeah. And this is something, of course, that applies to all AI. Once one AI knows it, all the rest can know it in five minutes. You can spread the learning out very quickly to the whole army. It's something probably most Americans don't know, but from what I've read, the number of drones the Ukrainians are kicking out is tremendous, something like 60,000 a month. Is that true, that they're building 60,000 new drones a month? And they're aiming for 80,000 a month. That's a million drones a year, which is a staggering number.
Sergey: Absolutely. And there is a catch here. I really believe, knowing the numbers, that during 2024, Ukraine and its allies will deliver probably up to a million and a half drones to the front line. It means that if every single drone hit its target, we would win the war in a week.
Jim: Yeah, I talked about that with Samo Burja, the idea of the kill ratio. Once I did my research and saw that the Ukrainians were going to deploy a million drones a year, I said, "Shit, if they killed one Russian for every three drones, the war would be over in months." To your point, if every drone killed one Russian, the war would be over in a week. They would just turn tail and go home; everybody would drop their rifles and run. I'm assuming that somebody in military intelligence is calculating this kill ratio. Have you heard what the current kill ratio is?
Sergey: I cannot talk about official numbers, but for the armed forces units we work with, the highly skilled and pretty well-organized units, it's from 15% to 30%. You can guess the numbers for the less qualified units.
Jim: Let’s say it’s 5%. So if you have a million drones, you’re going to kill 50,000 Russians a year, which is not quite enough to cause them to run, but is a really high cost. If you could move that up by a factor of four or five, you break their will pretty quickly.
Sergey: Correct, and this is exactly our target. So when we talk about efficiency, this is what we believe will probably happen over the next year. There is still a huge commitment to bring millions of pretty inexpensive, cheap-as-dirt, off-the-shelf, pretty simple drones to the front line. But we as an industry, as Ukraine, are starting to empower them with artificial intelligence, with computer vision, with all the supplementary tools to make them way more efficient. So my stake is on somewhere around next spring: we will see the inflection point, when we will have far fewer drones, each drone will be more expensive, probably 2K to 5K US dollars, but the efficiency will not be 5% or 10%, it will be 90%, 95%. That will be a totally different story. And this is the inflection point we should do everything to reach faster than the Russians.
Jim: Yeah. Yeah. Just like the atomic bomb in World War II, right? We thought the Germans were working on it; it turns out they weren't, or were kind of incompetent at it. So let's run the numbers here. A $5,000 drone times a million is only $5 billion. On the scale of things, not that much money. The US can easily write a check for an extra $5 billion a year. Call your congressmen. Tell them to give them another $5 billion to buy a million $5,000 drones, goddamn it. So yeah, the numbers work. If you can get a million drones a year at $5,000 each, you get past the pilot bottleneck with your swarm software, and you've got the kill ratio up to 20%, the Russkies have got to go home. They lose, right?
Sergey: Yes. And there is another factor pushing them to go home, a psychological one, because when you see five drones approaching your location, and it's not just five drones flying in the same direction but a coordinated group acting together... holy shit.
Jim: Yeah. I've seen some of this drone footage of individual Russian soldiers who freak out. And sometimes they just stop and give up, because they know they can't hide, and they just go, "Oh, well. I guess I'm going to die." And to your point, if you had a hunter-killer swarm focused on them, they'd totally freak out. So this is quite interesting. Let's turn that around a little bit. What do you know about Russian capacities in this domain?
Sergey: I've heard about several programs. They are developing and moving their whole industry, and their whole application of it, in the same direction. It would be a huge mistake to dismiss them as dumb, because they're not dumb. They're pretty clever. The enemy is really, really clever, really, really fast. And the key difference between us and them is how we operate in terms of team structure and the ability to move fast. At the end of the day, Ukraine is by nature, by our spirit, a very, very democratic country. It means we are open to cooperate, open to innovate, and to scale it pretty fast. Fortunately for us, Russia is a pretty vertically integrated, rigidly structured society, and the same applies to the army, multiplied many times over for the army. This is the natural limiting factor on their ability to scale innovation. But they have one really, really huge advantage there: as soon as they have something that works, they can just, by order of command, scale it tremendously. It means, again, we have to move really fast, way faster than they do.
Jim: Interesting. Yeah, that makes sense, because they still have brute force industrial capacity. If Putin says, “Make 5 million of these,” they’ll make 5 million of them, right? Now-
Sergey: Even if they only have the parts or the money for 4 million, they will literally just make the other million anyway. Correct.
Jim: All right. Let's move to a slightly more technical area, which is: what kind of data sets did you use to train these drones? And maybe talk a little bit, in general terms, about the mix between symbolic AI, where you program in rules and things, versus machine learning, where you learn from data. I'm also imagining that this would be a good application for simulators. Because in the self-driving car arena, for instance, 99% of the miles driven by the AIs are in simulators, not on the actual road, though of course the hardest problems are on the road, so you need a mix of both. Talk to me a little bit about the data sets, and the classes of AI that you're using in your work.
Sergey: Well, we're stepping into a bit of a gray zone here, so let me be pretty cautious. On the one hand, yes, we use a lot of data from different sources; most of it is open source data to some extent. And we work with different kinds of simulators to replicate data from the battlefield, rather than using data directly from combat flights, because that's, again, a gray zone. And the approach we follow to build the AI is, on the one hand, we build models. I have personal experience with artificial intelligence over the last 15 years, so I've seen a lot; we've built a lot of different systems, and there are always a lot of corner cases. And we follow the same approach to autonomy that has fortunately already been developed for self-driving cars: there are several levels of autonomy, and several levels of risk assessment for them.
But this is life-critical and mission-critical at the same time, having AI on the front line. So we have a two-level decision-making architecture. The first level is a model that is trained on data, from simulators or other data we gather; I'm trying to be pretty cautious here, because of the [inaudible 00:33:28]. Sorry, Jim. And on top of it, the model is caged in a set of strict rules, like the kill box approach I described before. This is like a cage for the decisions the model can make, which means it will not do anything really unexpected. Because one of the interesting trade-offs we faced is the trade-off between the efficiency of decisions from an operational point of view, and explainability, the requirement that human operators should be able to easily understand what's going on and what will happen next.
So the approach right now is that being clear to humans is more important than being maximally efficient. We will come back to this in probably a year or two, when we are ready. But the AI we build, again, as a life-critical and mission-critical system, has to earn the trust of the humans who use it.
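A minimal Python sketch of the two-layer pattern described above: a learned policy proposes an action, and a cage of hard rules, the kill box plus mandatory human confirmation of engagement, constrains what can actually be executed. The Action type and function names are assumptions for illustration, not Swarmer's architecture.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    kind: str                                         # e.g. "navigate", "observe", "engage"
    target: Optional[tuple[float, float, float]] = None

def constrained_step(
    propose: Callable[[], Action],
    inside_kill_box: Callable[[tuple[float, float, float]], bool],
    human_confirms: Callable[[Action], bool],
) -> Action:
    """Layer 1 proposes, layer 2 cages: only rule-compliant actions get out."""
    action = propose()
    if action.kind != "engage":
        return action                                 # navigation/observation passes through
    if action.target is None or not inside_kill_box(action.target):
        return Action(kind="observe")                 # never engage outside the approved box
    if not human_confirms(action):
        return Action(kind="observe")                 # a human stays on the trigger
    return action

# Example with stand-in callables: the model proposes engaging a target inside
# the box, and the (simulated) operator approves it.
decision = constrained_step(
    propose=lambda: Action("engage", (850.0, 1200.0, 120.0)),
    inside_kill_box=lambda xyz: all(0 <= v <= 2000 for v in xyz[:2]) and xyz[2] <= 500,
    human_confirms=lambda a: True,
)
print(decision.kind)  # "engage", because every rule in the cage was satisfied
```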
Jim: Yeah. That would strike me as probably pointing more towards symbolic approaches, and a little bit less towards deep neural nets. Deep neural nets being essentially opaque black boxes at this point, while systems-
Sergey: Absolutely.
Jim: While systems… If I was doing this, I’d probably do it with evolutionary symbolic AI. That’s what I would probably use.
Sergey: The first one who brings ChatGPT to the battlefield, I will shoot him myself, because for the moment it's a really, really bad idea.
Jim: Yeah. Yeah. I understand where you're coming from. The other thing I discovered when I was doing a little research on you guys is that you have been very aggressive about integrating other software. Some of the ones my analysis says you have used are Palantir, Shield AI, Rheinmetall's work, and Avator.AI. How much have you used other parties? And I'm also interested, to what degree have you been able to find useful things in the open source world?
Sergey: Let me answer it this way. The approach we follow is, first of all, that we try not to reinvent everything. We focus on the swarming software and on the AI for swarming, and we try to utilize third-party providers that speed us up. So we use different solutions from different vendors as building bricks, and we want to be the best building brick for the whole ecosystem, to provide as much value as possible, not only to the operator and the drone swarm, but to the army. There is a huge ecosystem of information flow, of ISR intelligence [inaudible 00:36:02], analytics, et cetera, et cetera, and you have to provide information and value to these systems as well, to be an integral part of the whole puzzle.
So this is it. And open source, that's an interesting story, because on the one hand there are a lot of civilian and open source solutions and technologies that look promising, but you cannot apply them to the battlefield, because you just cannot take a model trained on cat photos from the internet over the last two decades and apply it to forests, tanks, and camouflage. It means there are some low-hanging fruits, but mostly, many things have to be done completely from scratch.
Jim: That makes sense. Yeah. It is a different problem, but it’s a similar problem, which has got to be agonizing when you make these decisions, because if you find something off the shelf you can use, it reduces your risk and speeds up things a lot. But if it’s an inappropriate use, then it degrades your capacity.
Sergey: Correct. And again, building a company to defend democracy, we cannot just violate license agreements, restrictions, et cetera, et cetera. That's the point. We are strictly compliant on this.
Jim: Now, I hope the vendors are being pretty liberal with you.
Sergey: That's a different story, because we have faced many situations where people told us: okay, we cannot invest in a company that eventually kills people. Or: we cannot share our technology, we have a corporate rule, et cetera, et cetera, that we cannot harm people in any way. Okay, okay. I would raise the topic that, to some extent, I contest the point that Russians count as people at this moment, but-
Jim: Got you. Yeah, I noticed that you had D3 [inaudible 00:37:56] as an investor. They're a very interesting investment firm, basically focused on military applications. Have they been good to work with?
Sergey: Absolutely. And I am really grateful to them. They were the first serious fund that started investing in defense technology in Ukraine, and they were brave enough to jump on a flight and then a train to visit Kyiv first. That was a very good signal for other investors, because the whole ecosystem is pretty… It's fitting that it's Independence Day right now, because there is another parallel. The defense ecosystem in Ukraine is pretty close to what happened in engineering in the United States 30 years ago: there are a lot of engineers, a lot of bright ideas, and very few people who know how to make a sustainable business, a long-term story, out of it. And we need this expertise. We need these people, the investment expertise, the advisors and everything, to move fast, to organize it, to shape the industry.
Jim: Let's go back a little bit. My background is in complexity science, basically, so I'm always interested in the idea of collective intelligence. When you think about a swarm of drones, and we talked earlier about how they could have different roles, your software can understand a surveillance drone versus a bomber versus a suicide drone, et cetera, and you can orchestrate them. How do you think about designing your software and your systems such that the whole is greater than the sum of the parts? So that you're actually able to create a collective intelligence where, when you take the gestalt of what all the drones are thinking, plus the human, you get an exceptional level of performance?
Sergey: Jim, that's pretty straightforward. We just put an ant on every single drone and let it control the drone's behavior. That's it.
Jim: No.
Sergey: They’re doing this.
Jim: No, that’s a complexity science joke, folks.
Sergey: Correct. Correct. Let me give you one example that will probably be pretty clear to our listeners. First of all, you don't have precise and clear information about the environment or about the targets you have; that's always uncertain on the front line. Second, you don't have clear and constant communication between the parts of the swarm, between the different entities. So imagine you and I are parts of the group. We've just got an engagement command from the top, from some human who sits there and controls us, et cetera, et cetera. And, for instance, we just got jammed, so we are disconnected from each other. What do we do? Say we have a task to hit a tank. I start to calculate: okay, I have the appropriate payload, the appropriate bomb for this tank, and I can hit it.
And I keep in mind a lot of factors, like battery life, like distance to the tank, like the probability of the hit itself, and I figure my confidence number is something like 0.75. And I run the same calculation for you while being disconnected from you, because I know your location, I know your latest state, and I estimate that you have, for instance, a 0.95 probability. Okay, so my decision is pretty clear: I will not do it, you will do it. And you run the same calculation at the same time, so we both reach the same conclusion, and you just fly and hit that [inaudible 00:41:33] tank. Then we get out of the jammed area, get synchronized again, and everything works as expected. And what's the power of collective intelligence, of collective decision systems, here? The AI can do this at scale. It can run the same approach, the same numbers, for a swarm of 200 drones, because it has the same information and a snapshot of everyone.
For humans, that's impossible. This is one of the examples, and I think it illustrates a bit of the balance between behaviors. On top of it, you have some templates, some roles, and some behaviors. Again, you could put some black-magic, ChatGPT-box behavior in there that is really unclear to humans, but we stick to patterns. There are templates and patterns for different operations, because you never operate alone in the field at the front line. You always have neighbors. You always have different forces, different units. You have to be aligned, and they have to understand what will happen next, and therefore what to expect from your swarm and from your decisions, and what not to expect. That's why we stick to templates and to basic operation flows.
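A toy Python sketch of the disconnected decision example walked through above: every drone scores the whole swarm from the same last-known snapshot with the same deterministic function, so each one reaches the same conclusion about who should strike without exchanging a single message. The confidence formula and field names are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class PeerState:
    """Last-known snapshot of a swarm member, shared before the link was jammed."""
    drone_id: str
    distance_to_target_m: float
    battery_pct: float
    payload_ok: bool

def hit_confidence(s: PeerState) -> float:
    """Crude stand-in for a per-drone success estimate.

    The formula is made up; the point is only that it is deterministic,
    so every disconnected drone computes the same numbers.
    """
    if not s.payload_ok:
        return 0.0
    range_term = max(0.0, 1.0 - s.distance_to_target_m / 5000.0)
    return round(0.7 * range_term + 0.3 * s.battery_pct, 3)

def i_should_strike(my_id: str, snapshot: list[PeerState]) -> bool:
    # Each drone ranks the whole swarm from the same shared snapshot and only
    # strikes if it is the top-ranked member, so no two drones double-commit
    # even with no communication at all. Ties break on the drone id.
    ranked = sorted(snapshot, key=lambda s: (hit_confidence(s), s.drone_id), reverse=True)
    return ranked[0].drone_id == my_id

snapshot = [
    PeerState("alpha", distance_to_target_m=1800, battery_pct=0.55, payload_ok=True),  # ~0.61
    PeerState("bravo", distance_to_target_m=600, battery_pct=0.80, payload_ok=True),   # ~0.86
]
print(i_should_strike("alpha", snapshot))  # False: alpha holds back
print(i_should_strike("bravo", snapshot))  # True: bravo takes the shot
```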
Jim: Interesting. Now, do the drones communicate with each other? Or do they infer each other's state? That's even better. So when you get a communications breakdown, they can continue to execute their mission even if the center is no longer working, for whatever reason?
Sergey: Correct. Correct.
Jim: Now you’re describing some pretty powerful processing. This sounds like more processing than you could do on a Raspberry Pi or something. What kind of computing devices do you need on these drones to be able to do this kind of computation?
Sergey: It's funny, but the Raspberry Pi 4 works pretty amazingly. And there is a story here, because this is the military: out of the several thousands of RPis we have deployed to the front line, we have had only two malfunctions. Everything else glitches, the power lines, the AC-DC converters, communication systems, props, motors, but these little bastards are virtually indestructible. That's even better than industrial computer standards. We just love them.
Jim: Okay. So Raspberry Pi is good enough. All right, people. That’s interesting to know.
Sergey: We're really, really looking forward to getting the first Compute Module 5, because it's way more powerful, and for the form factor it will be a great addition. Of course, there are different computing systems used across all scales, from the Pi Zero to Jetsons, and even to modern x86 systems like K-7 based ones, but the Raspberry Pi is our workhorse. It's just an amazing piece of hardware.
Jim: That's interesting to know, because in the American hobby robotics space it's also by far the first choice. I actually had a little R&D project to design a robot to defend gardens against deer. How about that? We used a Raspberry Pi on it; we called it Garden Guardian. It probably would've worked as a business, but it wasn't big enough to be worth chasing. Let's see, a couple of details. Again, when you think about an arms race, one of the ways you fight an arms race in war is through decoys and concealment. How do you deal with that? In World War II, the Americans and the English had their fake army that faked Hitler out, made him think we were going to invade at Pas-de-Calais when actually we were going to invade at Normandy. They had inflatable tanks, and wooden cannons, and things of that sort. How do you deal with the Russians using decoys, if that's become a problem yet?
Sergey: Fortunately, with the latest technological advancements, that's a much less significant problem, because with radars, with synthetic aperture radars, with satellites that work in different ranges, it's really, really tough to make a decoy that has some strategic importance and that intelligence cannot differentiate from the real [inaudible 00:45:47], or from a real tank formation. I believe that for the Russians it's easier to send an actual tank there, or an actual missile system. But at the tactical level, that's another matter.
That's why I don't want to cover all the details, but there is a set of countermeasures for that. And one of the points is that, in contrast to the world wars, we fortunately have different sensors based on different technologies, and different ways of perception to equip drones with, so we can to some extent differentiate a decoy. And that's one of the interesting differences, by the way, between civilian technology and military technology. For civilian technology, for a facial recognition system, there are a lot of [inaudible 00:46:35] or tags for the current models, so you can just put a sticker with a QR code on yourself and the system simply doesn't see you. A military system has to deal with this by design.
Jim: Yeah. You have to assume the other guy is going to be aggressively trying to counter your system all the time, right? It's a true arms race. In facial recognition, yeah, there are corner cases where people do that, or get those hats that have the little special reflectors on them and stuff. But in warfare, you assume it's going on all the time. Now, the other one: particularly early in the war, I used to watch a lot of the open source videos of Ukrainian drones bombing Russian vehicles and such, and one thing I noticed is that the Russians never deployed smoke. I remember in the Cold War, the Russians were famous for having smoke as a big part of their tactical doctrine; all their tanks had smoke generators on them, and so forth. Have the Russians stopped using smoke as a countermeasure? And if not, how do you deal with it?
Sergey: They still use it a lot, especially when they move things in groups. But again, the technology is totally different now. When you have an infrared camera, an infrared scope, on every single [inaudible 00:47:55], smoke is not so effective. So that's the pretty straightforward answer.
Jim: Yeah, so basically it's in the sensor technology where you combat drones and things like decoys and smoke: you use multiple sensors, magnetic, radar, infrared, and then you synthesize all those signals and make a judgment about who is where, basically, right? That's very interesting. All right, so how about, and maybe this is a little speculative, what do you see happening over the next year or two in the drone arms race, in the air, in the Ukraine-Russia situation, assuming you guys don't crush the Russkies next spring, which I hope you do?
Sergey: Even if we do, and I really hope we do it early, and we're doing everything toward that, let me say a little about what's happening right now worldwide. A new arms race has just started, and I think the next decade will be all about this, from nuclear weaponry and a revision of the state-of-the-art technologies, because it looks like there is not much left in the arsenals, to drones, to cyber technologies, to everything. Switching to the more strategic view, first of all, for sure we will see a lot of drones that are fully autonomous. They won't rely on any kind of GPS signal, won't rely on any kind of communication. You just task the drone with something. For instance, you will have a seven-inch drone in your backpack. You take it out, say to it, "I have a tank 250 meters to the south of my location. Hit it," and just throw it into the air. It will execute the task.
The same goes for multi-drone weaponry, but those are all tactical things. One of the things we talk about is: what's the easiest way to win a war when you have an army of drones and your enemy has an army of drones? The easiest way is to get the enemy's army of drones under your control and turn it against them. That's the point. So cyber security and information security will be the next big thing. I'm not talking about laser weapons or kinetic weapons for that. That's all pretty straightforward. But this is the point: as soon as we apply more models, as soon as we apply more things that are built collaboratively, the more contributors you have to some AI model, the more chance there is that there will be some backdoor or some malware hidden inside it, or inside your deployment system. So it's way, way more about cyber security.
Jim: Of course, Ukrainians have always been famous for their hackers, right? So I'm sure you guys have got some… Back in my civilian days, I was involved in internet security in some of my companies, and fairly often the bad guys were Ukrainians, often hired by the Russians. So I'm sure you guys have got some good hackers out there someplace.
Sergey: Sorry about that. Sorry about that, Jim. Yeah, but this is exactly the reason why our CTO has a really strong CyberSec and InfoSec background, because we have had to deal with this from the very beginning.
Jim: Yeah, your own bad guys basically, right? And ones hired by the Russians. Yeah, the Ukrainian GRU, they were up to no good too about 15 years ago. I don't know about today, but anyway, that's neither here nor there. So let's move on to our last topic, which is: hopefully you won't have to be using your talents strictly for warfare. Do you see civilian applications for swarm approaches to drones?
Sergey: Sure. And this is a pretty interesting and important topic, because first of all, we have a couple of projects even right now outside of military applications, in public security and disaster recovery. Because where swarming shines is not when you just have something like border control or power line inspection, et cetera, et cetera, where there's just a pre-programmed route for one or ten drones and you don't care about it. It's when you have to react to the escalation or de-escalation of a situation. So disaster recovery, as it escalates and de-escalates. Public security, when some shit happens at a stadium or at some public event, and you have to deploy more drones to help, and other robots, not only aerial drones, and even manage your people, your ground security personnel, in an efficient way, because this system can ingest a huge amount of information and react appropriately, way better than humans, and then scale it back down after the critical point. So this is it.
Jim: Wow. Okay. This is going to be interesting. Ukraine ought to become a major military exporter after this shit show is over, assuming you defeat the orcs, and then hopefully there will also be some peaceful business to be had as well.
Sergey: Exactly. I really, really hope that eventually, at least for my company, the civilian part of the business will become way larger than the military one. It will mean we have gotten back to normal, worldwide.
Jim: All righty. Well, I want to thank Sergey Kuprienko, CEO and co-founder of Swarmer for an extremely interesting conversation today about drones, drone warfare, swarms, collective intelligence, and more. Thanks again.
Sergey: Thank you, Jim. It was really a pleasure for me today, and thank you so much.
Jim: It was great.