When AI Teaches Cars to Feel the Road
Can a car feel the road beneath its wheels—and should it? In this episode of AI Experience, we explore how artificial intelligence is enabling vehicles to sense their environment in an entirely new way. Boaz Mizrachi, co-founder and CTO of Tactile Mobility, explains how AI-powered virtual sensors can estimate grip, friction, and road conditions—without adding any new hardware. Drawing on 30+ years of experience in signal processing and system design, Boaz reveals how this technology mimics the instincts of a professional driver and why visual data alone isn’t enough for autonomous vehicles. We also discuss the limits of generative AI in edge computing, how tactile sensing can close the final 1% gap in safety, and what it takes to train AI models on real-world driving data collected across all conditions—from icy Canadian roads to dusty summer highways. If you’re curious about the next frontier of mobility and the role AI will play in making it safer, smarter, and more responsive, this episode is a must-listen.

Boaz Mizrachi is a seasoned technologist and entrepreneur with over 30 years of experience in signal processing, algorithm development, and system design in the automotive and networking industries. As a Co-Founder and Head of Engineering at Charlotte’s Web Networks, a leading high-speed networking equipment company (acquired by MRV Communications), and as the Head of the System Design Group at Zoran Microelectronics (acquired by CSR), Boaz demonstrated hands-on leadership and technological expertise. Boaz holds MSc and BSc degrees in Electrical Engineering from the Technion—Israel Institute of Technology, where he also served as a lecturer. Today, he continues to guide the next generation of innovators as a project mentor.

Boaz Mizrachi
CTO
Julien Redelsperger Boaz Mizrachi is the co-founder and CTO of Tactile Mobility, a company that enables vehicles to perceive road and driving conditions through software-based virtual sensors. So today, we will discuss the future of mobility and the role of AI in virtual sensing technology. Thank you for joining me today. How are you, Boaz?
Boaz Mizrachi Great. Thank you for having me on your podcast.
Julien Redelsperger My pleasure. So driving, AI, mobility, we're going to talk about autonomous vehicles as well. Lots of interesting things. But just to get started, could you please tell us exactly what is a virtual sensor in a car and what is an AI-powered virtual sensor?
Boaz Mizrachi Right. So we've all had sensors in our vehicles for many years, like wheel speed sensors, acceleration sensors, temperature sensors, etc. These are hardware sensors, okay? These are physical elements that convert some physical phenomenon into a signal, either voltage, current, whatever, and we read it and get some indication of the value we are measuring. Okay? A virtual sensor is a software-based sensor, meaning that we are reading the existing data or outputs that we have in the vehicle from the existing hardware sensors, and we are generating information or an insight that does not exist there originally. For example, most passenger cars don't have a weight sensor, okay? However, if we run some algorithms and try to deduce or derive the weight out of the information that we receive, for example, from the speed of the car, acceleration, fuel consumption, torques, etc., then we can derive the current vehicle load or vehicle static weight. So in this case, we don't have a hardware sensor for weight estimation, but we do have software that tries to estimate the weight of the car, which we call a virtual sensor. Now, an AI virtual sensor is something that involves data collection, processing, training, inference, etc. The term AI, artificial intelligence, is widely used, starting from very simple processing of the data up to some very sophisticated methods: you can train your models, you can do inference using the models that you have trained. If you use any part of that in the vehicle, it is considered to be an AI-based virtual sensor.
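To make the weight example concrete, here is a minimal sketch of such a virtual sensor, assuming we can read wheel torque and longitudinal acceleration from the CAN bus. The signal names, wheel radius, and filtering are illustrative assumptions, not Tactile Mobility's actual implementation, and the physics (F = m·a, with traction force approximated as torque over wheel radius) deliberately ignores rolling resistance, drag, and road slope.

```python
# A minimal sketch of a "virtual sensor" for vehicle mass, assuming we can
# read wheel torque and longitudinal acceleration from the CAN bus.
# Newton's second law: F = m * a, with traction force F ~ torque / wheel_radius.
# All names and constants are illustrative; resistances and slope are ignored.

WHEEL_RADIUS_M = 0.32   # assumed effective rolling radius
MIN_ACCEL = 0.5         # skip near-zero acceleration (F = m*a is ill-conditioned)

class MassVirtualSensor:
    def __init__(self):
        self.estimate_kg = None
        self.n = 0

    def update(self, wheel_torque_nm: float, accel_ms2: float) -> None:
        """Feed one CAN sample; refine the running mass estimate."""
        if abs(accel_ms2) < MIN_ACCEL:
            return
        traction_force_n = wheel_torque_nm / WHEEL_RADIUS_M
        sample_mass = traction_force_n / accel_ms2
        # Running average as a stand-in for a proper recursive filter.
        self.n += 1
        if self.estimate_kg is None:
            self.estimate_kg = sample_mass
        else:
            self.estimate_kg += (sample_mass - self.estimate_kg) / self.n

sensor = MassVirtualSensor()
for torque, accel in [(950.0, 1.9), (1200.0, 2.4), (800.0, 1.6)]:
    sensor.update(torque, accel)
print(f"estimated vehicle mass: {sensor.estimate_kg:.0f} kg")  # ~1562 kg
```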
Julien Redelsperger Okay, that's interesting. And how long have you been working on AI virtual sensors? Is it new, or has it been on the market for a long time?
Boaz Mizrachi So it's not been in the market for a long time. I would guess it started appearing in the market in the past three or four years. People started working on it years ago, but it takes time to mature. So that's the situation we have today.
Julien Redelsperger Okay. And so, traditional sensors like cameras and LIDAR, we know they can see the road, but you, with virtual sensors, claim to make the vehicle feel the road. So what does that mean, and how does AI help translate raw mechanical data into intelligence for vehicles?
Boaz Mizrachi Right. So not just the LIDAR: camera, infrared, etc. These are all hardware sensors that provide pixels as output. The pixels may stand for depth, may stand for color, may stand for heat, etc. But it is all a visual perception of the road, or the environment around us. Now if you look at other hardware sensors that we normally have in the chassis, like acceleration, wheel speed, calculated torques, etc., then these sensors pick up signals other than visual signals, signals that have to do with the vehicle and road dynamics, or the interactions between them. Now if you are an expert driver, you can feel it while you're driving the car. You can feel the grip of the vehicle on the road. You can feel your tire condition. You can feel whether the road is slippery or not, etc. You can feel your suspension system. You don't see it, but you feel it. So at Tactile Mobility, what we did is take those chassis-related sensors, not the visual ones, and we try to analyze them as if we were an expert driver, and then convert what we see in the signals into the feeling that a driver would have had driving the car. And when this information is very complicated to analyze, you need AI to help you with that, and I can give you one example. Let's take, for example, friction estimation. In order to derive the current friction potential, or available grip, that I have with my vehicle at this specific moment, normally in the industry you will look at the signals of the force or the torque at the wheel versus the slip. And if you look at the tire model that relates this force-versus-slip information for that specific vehicle at a specific time, then if you can identify micro-slips and micro-forces, the correlation will give you the available grip. The problem is that in order to analyze it, you need a lot of missing coefficients for the physical model you are trying to solve here, for the tire, for the suspension, for the road, etc. And to solve it, either you do tons of experiments in a controlled environment, which of course you cannot do in a production car driving around, or you use AI to try to solve those equations. Then you have what we call in the industry a digital twin, a model of the vehicle that you can use to generate those insights, for example, the current grip potential that I have now. So we have calibration software running in parallel to the main software that is constantly updating all the coefficients and calibrating my equations for my vehicle's current status. And on top of that, the other part of the software uses them to run inference, solve those equations, and provide us those estimations.
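A hedged sketch of the force-versus-slip idea Boaz describes: in the micro-slip region a tire responds roughly linearly, F ≈ C·slip, and the fitted stiffness C correlates with the available grip. The numbers and the mapping from stiffness to a friction estimate below are invented for illustration.

```python
# Hedged sketch: fit tire stiffness C from micro-slip samples, F ~ C * slip,
# then map stiffness to an available-grip estimate. The constants and the
# stiffness-to-mu mapping are illustrative, not a production calibration.

def wheel_slip(wheel_speed: float, vehicle_speed: float) -> float:
    """Longitudinal slip ratio from wheel and vehicle speed (m/s)."""
    return (wheel_speed - vehicle_speed) / max(vehicle_speed, 0.1)

def fit_tire_stiffness(samples: list[tuple[float, float]]) -> float:
    """Least-squares slope of force vs slip through the origin."""
    num = sum(s * f for s, f in samples)
    den = sum(s * s for s, _ in samples)
    return num / den if den else 0.0

# (wheel speed, vehicle speed) m/s and longitudinal force N, from normal driving
speeds = [(20.08, 20.0), (20.14, 20.0), (20.20, 20.0)]
forces = [520.0, 900.0, 1300.0]
samples = [(wheel_slip(w, v), f) for (w, v), f in zip(speeds, forces)]

stiffness = fit_tire_stiffness(samples)
mu_estimate = min(1.0, stiffness / 150_000.0)  # invented scaling for illustration
print(f"stiffness ~ {stiffness:.0f} N/unit-slip, grip estimate mu ~ {mu_estimate:.2f}")
```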
Julien Redelsperger Okay. We've been talking a lot about AI since November 2022, when OpenAI launched ChatGPT, which was the true launch of generative AI. Does generative AI have anything to do with the way you use AI, or were you working in AI way before ChatGPT was launched?
Boaz Mizrachi Yeah. So the answer is that we were working on it many years before that. And again, in my humble opinion, I don't think it has much to do with what we do, and I can tell you why. Because the search space, or the signal space, that we are trying to solve here is of relatively low magnitude, okay? The search space is relatively small compared to the search space that you have in LLMs or other big-data problems. And also the resources that you have in the vehicle for solving those problems are very limited. We are talking about tens or hundreds of kilobytes of memory. This is what you have. But the signal bandwidth is also very low. We are talking about CAN bus signals, a couple of thousand messages per second. So it's not much, but the resources are not much either. This means that the tools you can use for that are lean tools, very low-cost and efficient tools, not generative AI.
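As a back-of-the-envelope check on those constraints, using assumed figures rather than Tactile Mobility's actual budget, both the input bandwidth and a lean model footprint really are tiny by machine-learning standards:

```python
# Assumed figures, for scale only: a CAN bus delivering a few thousand
# messages per second, and an ECU granting tens to hundreds of KB of memory.

CAN_MSGS_PER_S = 2000
CAN_PAYLOAD_BYTES = 8                       # classic CAN frame payload
input_rate_kb_s = CAN_MSGS_PER_S * CAN_PAYLOAD_BYTES / 1024
print(f"raw signal bandwidth ~ {input_rate_kb_s:.0f} KB/s")   # ~16 KB/s

# A lean model, e.g. a small quantized MLP, fits such an ECU with room to spare:
weights = 32 * 16 + 16 * 4                  # two small dense layers (hypothetical)
bytes_per_weight = 2                        # int16 quantized
print(f"model footprint ~ {weights * bytes_per_weight / 1024:.1f} KB")  # ~1.1 KB
```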
Julien Redelsperger Okay. But still, your model needs to be trained. So what are the key challenges in training AI models for tactile sensing?
Boaz Mizrachi Right. So first let's define the target, okay, for this training. The target is to be able to identify situations on the road surface and in my chassis, for example, my tire condition, out of what we feel from the car. Okay? Just like a professional driver can tell that your tires are completely worn out compared to new tires. They can tell whether your tires can handle the water, okay, the puddle or the wet surface on the road, rather than hitting an aquaplaning phenomenon, okay, where the car can fly off the road. That's a professional driver. So if we want to identify those situations ourselves, we need to train our models using the data that we collect and then, according to some ground truths, produce a model that will identify those situations the same way a professional driver would. The challenge here is that you need to get a lot, a lot of recorded situations, with the signals, okay, for each such event. Because there are a lot of examples of paved roads, okay, and unpaved roads, and a lot of low-mu, low-friction situations, for example, snow versus ice versus frozen roads, et cetera, versus dirt, okay, mud, whatever. Each phenomenon will yield a different variety of signals on those hardware sensors that you collect from the vehicle. And you need to make sure that you have tried your algorithm on top of all of those signals, all of those situations, and then make sure it works. So you need to collect data from all over the world, from real-world situations, right? Not just on one proving ground. You need to collect the raw data. Raw data means the real-time data that runs on the CAN bus, okay, with all the signals inside, and you need to record it, and you need to upload it to the cloud in some way. And then you also need the ground truth for that. Okay, let's say that I'm recording from millions of vehicles driving around. How do I know what the ground truth was there? Okay, the signals show me it was low mu, the wheels slipped, et cetera, but how do I know it's correct? How do I know it's not an error because the tire is worn out? Okay, it's not that the road is problematic, the tire is problematic. How do I differentiate between the two? How do I know the ground truth? So the annotation of the collected data is one of the main challenges. And then, of course, the cost. We have to stand the cost. As I mentioned, we have lean resources, we have lean computation power, and we have a lean budget to do that. Okay, I cannot have hundreds of test drivers driving all around the world collecting data for me. That will not happen. So those are the main challenges in collecting data, training AI models, and verifying them.
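One hedged sketch of the disambiguation problem Boaz raises: is low grip the road's fault or the tire's? A simple heuristic, ours for illustration rather than the company's method, is cross-vehicle consistency: many vehicles reading low grip at one location implicates the road, while one vehicle reading low grip everywhere implicates its tires.

```python
# Illustrative annotation heuristic (not Tactile Mobility's actual pipeline):
# blame the road when many vehicles agree at one location, blame the tires
# when one vehicle reads low grip wherever it goes.

from collections import defaultdict

def annotate(readings: list[tuple[str, str, float]], mu_low: float = 0.4):
    """readings: (vehicle_id, location_id, mu_estimate) triples."""
    by_location = defaultdict(list)
    by_vehicle = defaultdict(list)
    for vid, loc, mu in readings:
        by_location[loc].append(mu)
        by_vehicle[vid].append(mu)
    road_issues = {loc for loc, mus in by_location.items()
                   if len(mus) >= 3 and sum(mus) / len(mus) < mu_low}
    tire_issues = {vid for vid, mus in by_vehicle.items()
                   if len(mus) >= 3 and sum(mus) / len(mus) < mu_low}
    return road_issues, tire_issues

readings = [
    ("car_a", "bridge_12", 0.30), ("car_b", "bridge_12", 0.28),
    ("car_c", "bridge_12", 0.35), ("car_d", "hwy_7", 0.85),
    ("car_e", "hwy_7", 0.30), ("car_e", "bridge_9", 0.32), ("car_e", "main_st", 0.29),
]
roads, tires = annotate(readings)
print(roads)  # {'bridge_12'}: three cars agree the road is slippery
print(tires)  # {'car_e'}: one car reads low grip everywhere it goes
```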
Julien Redelsperger Okay, interesting. And who are your clients? Do you work with, I don't know, car makers, manufacturers, any big brands?
Boaz Mizrachi Yeah, so we play as a tier one and sometimes as a tier two supplier. So our customers are the OEMs directly, the car manufacturers themselves, where they want to enhance customer functions in the vehicle. We provide them these virtual sensor outputs and also other building blocks so they can, on top of that, develop their own customer functions. Just for example, say you would like an adaptive cruise control that will select or precondition the distance to the car in front of you automatically according to the friction: if friction is low, I will keep a larger distance from the car in front of me. So if we provide the grip estimation at every point in time, they can decide on the distance to keep from the car in front. So there is the sensor function, and there's the customer function. In this case, the customer is the OEM. We also have other partners, tier one partners, that manufacture components for vehicles, or large components like ECUs, electronic control units, and steering systems, etc. In that case, we embed our software on those ECUs as a tier two company, and they provide it to the end customer, the car manufacturers. So we have both.
Julien Redelsperger I'd like to talk about autonomous vehicles, because I believe this is probably the next revolution in terms of mobility. Could your solution be a way of democratizing autonomous vehicles? And can AI-powered tactile sensing be the missing piece that finally makes self-driving cars safe and reliable?
Boaz Mizrachi Right. So when people ask me this question, I give an example that you can feel for yourself. Imagine that you are driving a car remotely, okay? Like a video game. You have your screens, you see what the car is seeing, okay? You have cameras in the car. The car is relaying the information, the video, to your office, and you have your pedals, and you have a steering wheel, okay? It's like a game, but it's not a game, it's real, okay? There is a real car driving on a real road with real cars around it, and you need to drive it remotely, okay? Let's say that you have zero delay, okay? Now, will you drive the car?
Julien Redelsperger Not sure.
Boaz Mizrachi Not sure, okay. If you are younger, you will most probably say, "Yes, I will. I will take the risk." If you are older, maybe you won't. And even if you are younger, if I tell you that your mother is sitting in the backseat of the car, okay? You will say, "I'm not sure," okay? And if you say, "I'm sure I will," I'll say, "Okay, but let's say that it's rainy, okay? It's night, it's rainy, the road is slippery, et cetera." Now you say, "No, I will never do that." And when you try to think about why you will not do it, it's because visual information is not enough to drive the car, okay? If you rely on visual information alone, it's not enough for you to drive the car. You need to feel it. You need to sit in the car to feel it, excuse my French, to feel it in your butt, okay, with your hands, through the vibrations, through what happens to you in the car. So this is the missing piece. It might be 5% of the information that you need, maybe even 1%, but nobody will drive a car autonomously if there is 1% of the information missing, okay? A 1% chance that the car will fail because it will not feel the slippery road, because it will not feel that the tires are not doing well, okay? That is the situation. So at the end of the day, tactile sensing is indeed a missing block for building a fully safe autonomous car.
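A concrete way to see why that missing tactile information matters, both for the friction-aware adaptive cruise control mentioned earlier and for the safety gap described here, is the idealized braking-distance formula d = v² / (2μg): the same car needs several times the stopping distance on ice as on dry asphalt.

```python
# Why a grip estimate matters for distance keeping and safety:
# idealized braking distance d = v^2 / (2 * mu * g) at friction coefficient mu.

G = 9.81  # m/s^2

def stopping_distance_m(speed_kmh: float, mu: float) -> float:
    v = speed_kmh / 3.6  # km/h -> m/s
    return v * v / (2 * mu * G)

for surface, mu in [("dry asphalt", 0.9), ("wet road", 0.5), ("ice", 0.15)]:
    d = stopping_distance_m(100, mu)
    print(f"{surface:12s} mu={mu:.2f} -> {d:5.0f} m to stop from 100 km/h")
# dry asphalt ~44 m, wet road ~79 m, ice ~262 m
```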
Julien Redelsperger So you know, I live in Canada, where we have, let's say, four months of winter, with snow, icy roads, complicated situations sometimes. Could your solution be helpful in those conditions, and can it adapt to any kind of condition, like sunny roads, dusty roads, or snow and ice?
Boaz Mizrachi So as I mentioned before, we need to prove, and this is what we do, that our system, our algorithms and technology, can work in any situation, okay? Any road condition, any chassis condition the vehicle is involved in: the signals that are collected there and provided in real time to our software in the ECU, okay, will make our algorithm make the right decisions, or provide the right perception of the condition. That's what we need to make sure of, and this is what we are testing. We have millions and millions of kilometers recorded, okay, over the years, thousands of vehicles, different types of roads. By the way, we also have thousands of kilometers recorded in Canada in winter conditions. So any change, any improvement to our algorithm is automatically tested, regression-tested, over those millions of kilometers.
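For illustration, a minimal sketch of what such an automated regression harness could look like; the data layout, the stand-in estimator, and the tolerance are placeholders, not the company's actual tooling.

```python
# Hedged sketch: replay recorded fleet drives through a candidate algorithm
# and score its grip estimates against annotated ground truth, drive by drive.

def regression_test(recorded_drives, algorithm, tolerance=0.1):
    """Return the IDs of drives where the candidate disagrees with ground truth."""
    failures = []
    for drive in recorded_drives:
        estimate = algorithm(drive["can_log"])  # offline replay, no vehicle needed
        if abs(estimate - drive["ground_truth_mu"]) > tolerance:
            failures.append(drive["id"])
    return failures

def toy_estimator(can_log):
    # Stand-in for the real virtual-sensor pipeline.
    return sum(can_log) / len(can_log)

drives = [
    {"id": "ca-winter-001", "can_log": [0.20, 0.30, 0.25], "ground_truth_mu": 0.25},
    {"id": "de-summer-042", "can_log": [0.90, 0.85, 0.95], "ground_truth_mu": 0.90},
]
print(regression_test(drives, toy_estimator))  # [] means every drive passed
```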
Julien Redelsperger Something I'd like to ask about: how do you process the data? Because you need to work in real time for safety-critical applications. How do you do that? Do you send data to the cloud from the car, or is it processed in the car? And how do you manage the latency between the situation that is happening and the fact that the signal needs to be processed, analyzed, and acted upon?
Boaz Mizrachi Excellent question. You can imagine that if we were depending on cloud communication, delays, reception, et cetera, it would never work. So the answer is that the algorithm and the software are fully contained in the ECU, working on board, edge-computed. No need for the cloud. Yet, if we do have cloud communication, we upload the results of the estimation, for example, the road condition that we have estimated at that specific position. On the cloud side, what we do is crowdsourcing. We are collecting information from those millions of cars that have Tactile inside, with situational bits and some other metadata that helps us understand the situation, not just the estimation by itself. And then we're building maps, close to real time, maps of what we call surface DNA. Surface DNA is a set of tactile layers of the road describing what the vehicle will feel on those roads, for example, what the friction is there. And those real-time maps can be monetized, either through third parties, for example, road authorities and municipalities that would like this information to maintain the roads in Toronto or wherever you live, or maybe even through the OEMs themselves, who would like to get this map, what we call in the industry e-horizon, the electronic horizon, what's in front of my vehicle, so I can precondition my chassis accordingly. So cloud communication is only an assist if you do have it. If you don't have it, you don't use it. Everything is done in the vehicle, in the ECU. Even if you're driving your car in an area where you have no connection, no 5G network, nothing, in the middle of nowhere, it still works. It's doing calibration by itself, training by itself, and at the end of the day, providing estimations by itself.
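A minimal sketch of the crowdsourced "surface DNA" map idea, assuming each vehicle uploads per-road-tile friction estimates computed on the edge; the tiling scheme and the robust aggregation below are our assumptions, not a documented design.

```python
# Hedged sketch: aggregate edge-computed friction estimates into a per-tile map.
# A robust statistic (the median) keeps a single bad report from corrupting a tile.

from collections import defaultdict
from statistics import median

class FrictionMap:
    def __init__(self):
        self.reports = defaultdict(list)  # tile_id -> list of mu estimates

    def ingest(self, tile_id: str, mu: float) -> None:
        self.reports[tile_id].append(mu)

    def tile_mu(self, tile_id: str):
        mus = self.reports.get(tile_id)
        if not mus or len(mus) < 3:
            return None       # not enough evidence for this tile yet
        return median(mus)    # median resists isolated outlier reports

fmap = FrictionMap()
for mu in (0.82, 0.15, 0.80, 0.78):     # one outlier among four reports
    fmap.ingest("tile_48.42_-89.61", mu)
print(fmap.tile_mu("tile_48.42_-89.61"))  # 0.79: the median shrugs off the 0.15
```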
Julien Redelsperger I'd like to mention the term hallucination. In generative AI, a hallucination is when the AI creates fake information: data, names, places, et cetera. You aggregate data from multiple vehicles to map road conditions dynamically. How do you make sure your data and the AI are accurate? And how do you prevent one bad data point from corrupting the whole system? Do you have to deal with hallucinations in your system?
Boaz Mizrachi So that's a major problem in our business. A major problem. On one hand, you need to support totally different situations that you would never even imagine might happen when you are designing your system. Future-proof, let's call it. That's on one hand. On the other hand, the problem is that you cannot prove it. So how do I prove it? How do I train it? What you need to do is somehow control the outcomes of the AI. The way we do that is to use signal processing and physical modeling to narrow the search space. I mean, at the end of the day, AI comes to solve a search problem: I have a huge space of situations that I need to search to provide the correct answer, whether I'm playing chess or whatever. There are a lot of things I can do; which is the best for me? So in our case, you narrow the search space by doing some physical filtering on the signals that you get, and by using physical models that narrow the search space, and then the search algorithm will be more accurate. On top of that, you have a final stage in the pipeline that needs to eliminate those situations where the quality of the estimation is poor, where you are not sure. This is how you limit it from the input, and bound it from the output, and you make sure that the algorithm will not produce any nonsense.
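A hedged sketch of that "limit it from the input, bound it from the output" pipeline: physically implausible samples are rejected before estimation, and implausible or low-confidence estimates are suppressed after it. All thresholds and signal names here are invented for illustration.

```python
# Hedged sketch of a bounded estimation pipeline: an input plausibility gate
# narrows the search space, and an output gate suppresses low-quality answers.

MU_MIN, MU_MAX = 0.05, 1.2   # physically plausible road-friction range

def plausible(sample: dict) -> bool:
    """Input gate: reject samples that violate basic vehicle physics."""
    return 0.0 <= sample["speed_ms"] <= 70.0 and abs(sample["accel_ms2"]) <= 12.0

def gated_estimate(samples: list, estimator):
    clean = [s for s in samples if plausible(s)]            # narrow the input
    if len(clean) < 10:
        return None                                         # not enough evidence
    mu, confidence = estimator(clean)
    if not (MU_MIN <= mu <= MU_MAX) or confidence < 0.8:    # bound the output
        return None                                         # suppress, don't guess
    return mu

def toy_estimator(samples):
    mus = [s["mu"] for s in samples]
    return sum(mus) / len(mus), 0.95   # (estimate, confidence), stand-in logic

samples = [{"speed_ms": 25.0, "accel_ms2": 1.0, "mu": 0.6}] * 12
print(gated_estimate(samples, toy_estimator))  # 0.6, passes both gates
```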
Julien Redelsperger Could this technology extend beyond vehicles? For example, could AI-driven tactile sensing improve robot mobility, smart city infrastructure, or even wearable tech? Also, maybe trains, trucks, planes?
Boaz Mizrachi So over the years, we have looked at other fields that might use it. The answer is that you can do that. There are a lot of fields that can benefit from it: any hardware system that consists of physical sensors, besides visual sensors, really physical sensors, where out of those sensors you can derive additional insights that the sensors cannot provide by themselves. For example, weight estimation out of force and acceleration sensors. In any case like that, tactile virtual sensing will add an additional layer of insight, so the perception will be much better, and the decisions and actuations of that machine, whatever it is, a robot or a train or whatever, will be wiser. Yet, since this is so complicated, and we are a relatively small company, we need to focus. So today we are working on the vehicle industry only.
Julien Redelsperger So what is the future of mobility for you? Do you think AI will be able to produce the first, I don't know, autonomous vehicle that could drive anywhere, everywhere, in all types of road conditions? How do you see the future of mobility in the next five years?
Boaz Mizrachi So these are two different questions: the next five years, and autonomous vehicles. When people talk about autonomous vehicles, they mostly think about level five, OK, which is: just go to sleep and the car will bring you from point A to point B with no interventions, in any weather condition, any road condition, with other passengers around and other cars driving around, etc. This, I think, will not happen, at least in the next 10 or 20 years, for a lot of reasons, OK, a lot of risk there. Yet the industry will always strive to get there. We will always try to get closer to that. So today we are talking about level three, level three plus, level three minus, etc. Instead of a revolution, we're having an evolution, OK, that provides more and more knowledge to those systems, to the vehicle systems, and allows us to automate more and more of the functions of the vehicle, like valet parking. Valet parking is a good example. It requires a lot of knowledge. It may require some external assistance, I don't know, like beacons, or mapping those underground parking garages, or dead reckoning. But at the end of the day, it's a relatively confined area. As I said before, the search space is smaller. You don't have a lot of uncontrolled situations in a parking lot, as you can have on the highway. So this is something that could be solved. And I think an interesting function will be, you know, in premium cars you will have valet parking: you will stop in front of the, I don't know, supermarket in your neighborhood, and the car will drive itself to the parking. You will summon it using your smartphone or whatever, and it will come back and pick you up. I think this is a function that we may see five years from now.
Julien Redelsperger Thank you so much, Boaz. So at the end of each episode, the guest answers a question posed by the previous guest. After that, you have the opportunity to ask a question for the next guest. So here's your question, courtesy of Ilya Rozman, who is an entrepreneur and AI expert. We can listen to his question right now.
Ilya Rozman Artificial intelligence as an influence on businesses, healthcare, and other companies: is it something that will be here forever, or is it something with a time frame, that will soon be over?
Boaz Mizrachi Interesting question. I hadn't given it a thought. I mean, today we are all very excited to see those very rapid improvements to our day-to-day life, medical care, et cetera, coming out of those AI innovations. So we don't see what will come next. I'm not sure if it will be forever, but in any case, the next revolution will rise out of this AI. So we need to think a lot about AI and what can come out of it in the near future. But it will definitely come out of that.
Julien Redelsperger Do you think AI will keep us safe, healthy, maybe make us smarter? I don't know if you have any kids, for example, but do you think kids will benefit from AI in the future?
Boaz Mizrachi Definitely they will. They benefit from it today. As we speak, we both benefit from it, and we will keep doing so. We just need to make sure that we don't take those outcomes for granted. Never believe AI automatically; do your own thinking on top of it, watching for fake AI content and things like that. And try to take advantage of the spare time, or the spare energy and resources, that come as an outcome of using AI. Those are the two takeaways.
Julien Redelsperger Thank you. So now what question would you like to pose for the next guest?
Boaz Mizrachi My question is something that bothers me a lot while developing our system: how will you prove the robustness of your AI-driven solution? And when you answer, try to avoid statistical testing as an answer.
Julien Redelsperger Perfect. Thank you so much. Well, it's been an absolute pleasure speaking with you today. Thank you for joining me.
This transcription was generated by an artificial intelligence tool. It may not be 100% accurate and could contain errors and approximations.