Automotive Functional Safety for Sensor Fusion Systems
Autonomy and intelligence are two important factors driving the automotive industry forward. Both are enabled by aggregating data about a vehicle's surroundings from components such as cameras, LiDAR, RADAR, and ultrasonic sensors to determine the vehicle's next course of action.
Automotive systems such as ADAS function based on the fusion of these sensors. Alongside the sensors, algorithms (such as object detection) running at the edge make the vehicle intelligent and autonomous.
But before we go any further, let's understand how these sensors operate and help the vehicle perceive the world around it in real time.
How Do These Sensors Help the Vehicle in Perceiving the Environment?
A vehicle with any level of autonomy needs to constantly sense and understand its environment to navigate traffic effectively.
- The data is captured using camera, RADAR, and LiDAR. This helps the vehicle understand the objects in its vicinity, the space available for movement, and likely future motion. However, each sensor has pros and cons; for example, their performance can degrade in extreme weather conditions.
- Capturing the data, however, is not enough; it also needs to be segmented. The objects in the surroundings are segmented into categories such as people, cars, and traffic infrastructure.
- Once the data is segmented, objects are classified as relevant or not, ruling out those that do not matter for spatial awareness. This helps the vehicle navigate traffic without causing accidents.
- Based on these classified surroundings, the vehicle can predict its next action during the journey.
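The capture-segment-classify flow above can be sketched in a few lines. This is an illustrative toy, not a real perception stack; the `Detection` record, its field names, and the relevance rules are all assumptions made for the example:

```python
from dataclasses import dataclass

# Hypothetical detection record; field names are illustrative only.
@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "car", "billboard"
    distance_m: float  # range to the object

# Categories that matter for spatial awareness (an illustrative subset).
RELEVANT = {"pedestrian", "car", "cyclist"}

def filter_relevant(detections, max_range_m=100.0):
    """Keep only objects that matter for navigation (the classification step)."""
    return [d for d in detections
            if d.label in RELEVANT and d.distance_m <= max_range_m]

scene = [Detection("pedestrian", 12.0),
         Detection("billboard", 40.0),
         Detection("car", 150.0)]
print([d.label for d in filter_relevant(scene)])  # only the nearby pedestrian survives
```

The billboard is dropped as irrelevant and the distant car falls outside the assumed operational range; only nearby, safety-relevant objects are passed on to prediction.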
However, sensor fusion poses serious challenges in implementing Automotive Functional Safety (FuSa). Let’s try to understand the reasons behind it.
Why Does Sensor Fusion Pose Challenges for Automotive Functional Safety?
The main purpose of sensor fusion is to combine the strengths of inputs from multiple sensors to overcome their individual weaknesses. Sensor fusion systems are often complex, and they can also be non-linear (involving trigonometry, divisions, multiplications, and transformations that distort the relationship between an input and an output). Let's try to understand this with an example:
- RADAR measures range and angle, which then need to be converted to Cartesian coordinates using trigonometric functions:
x = r·cos(θ), y = r·sin(θ), where the cos and sin terms make the mapping non-linear.
- Cameras, on the other hand, use projective geometry to map 3D real-world points to 2D pixel coordinates. This perspective projection is non-linear; for planar scenes, the mapping is a transformation called a homography.
- GPS, odometers, etc. are largely linear (straightforward proportional relationships), and fusing linear and non-linear functions raises several challenges: mismatched assumptions; verification complexity; error propagation and amplification (a minor error in a linear system can be amplified when passed into a non-linear function); and isolation in the safety architecture (ISO 26262 expects freedom from interference between components, but with linear and non-linear functions tightly coupled in a fusion pipeline, assigning safety levels to the different systems becomes difficult).
- Linear functions are also easy to test, while non-linear functions can mostly be tested only by simulation. When these functions are coupled, you can lose the traceability and coverage required for ISO 26262 compliance.
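The error amplification mentioned above is easy to demonstrate with the range/angle conversion. In this sketch (with illustrative numbers), a bearing error of only half a degree becomes a lateral position error of nearly a metre at 100 m range:

```python
import math

def radar_to_cartesian(r, theta_rad):
    """Convert a RADAR range/bearing measurement to x/y coordinates.
    The trigonometric terms make this mapping non-linear in theta."""
    return r * math.cos(theta_rad), r * math.sin(theta_rad)

# A small angular error grows with range.
r = 100.0
bearing = math.radians(10.0)
err = math.radians(0.5)

_, y_true = radar_to_cartesian(r, bearing)
_, y_meas = radar_to_cartesian(r, bearing + err)
print(abs(y_meas - y_true))  # ~0.86 m lateral error from a 0.5 degree bearing error
```

This is exactly why a tiny, well-behaved error in one (linear) stage of the pipeline can become a safety-relevant error after a non-linear stage.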
How Does Sensor Fusion Work?
Let’s imagine a scenario where your vehicle has autonomous functions. What does it need to know to navigate through traffic successfully?
- What is the position of the vehicle?
- How fast is the vehicle? (velocity)
- What’s around the vehicle? (objects/object detection)
Just like humans use their senses to gather information about a situation, autonomous cars have sensors and algorithms that help them assess the situation and maneuver the vehicle.
Once you have collected data from these sensors, you need to make it coherent, as the data can be noisy. The noisy sensor readings need to be combined to generate a single clean estimate.
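A minimal sketch of combining two noisy readings is inverse-variance weighting, which is the one-dimensional core of what a Kalman filter update does. The sensor values and variances below are made up for illustration:

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighting: the noisier sensor gets less weight.
    The fused variance is smaller than either input variance."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# RADAR says the object is at 20.0 m (variance 4.0); LiDAR says 19.0 m (variance 1.0).
pos, var = fuse(20.0, 4.0, 19.0, 1.0)
print(pos, var)  # 19.2 m with variance 0.8 — better than either sensor alone
```

Note how the fused estimate sits closer to the more trustworthy (lower-variance) LiDAR reading, and how the combined uncertainty drops below both inputs.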
How is the estimation done?
By using a Kalman Filter (KF) or, for non-linear systems, an Extended Kalman Filter (EKF): mathematical algorithms that provide an optimal estimate of the state of the system.
Two things happen in each cycle: prediction (estimating what should be happening) and update (refining the estimate as sensor data arrives). For non-linear systems, the EKF linearizes the non-linear equations around the current estimate using calculus (Jacobians), and then applies standard KF logic to the linearized version.
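The predict/linearize/update loop can be sketched as a minimal one-dimensional EKF. Everything here is an assumption made for illustration: a constant-velocity state [position, velocity], a non-linear slant-range sensor h(x) = sqrt(pos² + H0²) for a sensor mounted at height H0, and arbitrary noise values:

```python
import numpy as np

H0 = 2.0   # assumed sensor mounting height (m)
dt = 0.1   # assumed time step (s)
F = np.array([[1.0, dt], [0.0, 1.0]])   # linear constant-velocity motion model
Q = np.eye(2) * 0.01                    # process noise (illustrative)
R = np.array([[0.25]])                  # measurement noise (illustrative)

def h(x):
    """Non-linear measurement function: slant range to the target."""
    return np.sqrt(x[0]**2 + H0**2)

def H_jacobian(x):
    """Linearize h around the current estimate (the 'calculus' step)."""
    return np.array([[x[0] / h(x), 0.0]])

def ekf_step(x, P, z):
    # Predict: propagate state and covariance through the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: standard KF logic applied to the linearized measurement.
    Hj = H_jacobian(x)
    S = Hj @ P @ Hj.T + R
    K = P @ Hj.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (np.array([z]) - h(x))  # correct with the innovation
    P = (np.eye(2) - K @ Hj) @ P
    return x, P

x, P = np.array([5.0, 1.0]), np.eye(2)  # initial guess: 5 m away, 1 m/s
x, P = ekf_step(x, P, z=5.5)            # one measured slant range arrives
print(x)
```

Each call to `ekf_step` is one prediction-plus-update cycle; the Jacobian is recomputed at every step because the linearization is only valid near the current estimate.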
The Role of ISO 26262 in Sensor Fusion
Now that we have looked at how intelligence is applied, one thing to remember is that applying intelligence without defining safety makes the system ineffective. That is why we need safety standards.
The purpose of ISO 26262 is to lay down a framework that helps automotive electronic and software systems operate safely in various conditions, minimizing the chance of failure. ISO 26262 addresses sensor fusion through a systematic safety lifecycle. Let’s walk through that lifecycle:
- Hazard Analysis and Risk Assessment (HARA): The functional safety lifecycle starts with HARA, which is covered in Part 3 of ISO 26262. For sensor fusion, HARA helps identify potential hazards such as:
- Failure to detect objects (humans or obstacles).
- Faulty sensor data.
- Inaccurate estimation of object position and velocity in the vicinity.
All identified hazards are analyzed based on severity (S), exposure (E), and controllability (C) to determine the appropriate ASIL.
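The S/E/C classification maps to an ASIL via the risk graph in ISO 26262-3. A widely cited shortcut is that the sum of the class indices determines the level; the sketch below uses that shortcut, but for any real safety case you should confirm against the actual table in the standard:

```python
def asil(s, e, c):
    """Map severity (S1-S3), exposure (E1-E4), and controllability (C1-C3)
    class indices to an ASIL, using the 'sum of indices' shortcut that
    reproduces the ISO 26262-3 risk graph."""
    total = s + e + c
    if total <= 6:
        return "QM"  # quality management only, no ASIL required
    return {7: "A", 8: "B", 9: "C", 10: "D"}[total]

print(asil(3, 4, 3))  # S3/E4/C3 -> "D": e.g. an undetected pedestrian at speed
print(asil(1, 2, 1))  # low severity, rare exposure -> "QM"
```

This also makes the trade-offs visible: lowering any one of S, E, or C by one class drops the ASIL by one level, which is exactly the lever that safety mechanisms and operating restrictions pull on.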
- Assigning an ASIL: ASILs range from A (lowest) to D (highest). Because sensor fusion is involved in critical vehicle functions, its ASIL typically ranges from B to D. The ASIL is assigned based on the S/E/C classification from HARA.
- Defining Safety Goals and Functional Safety Requirements: Once you have assigned ASILs, the next step is defining safety goals for the sensor fusion system. Safety goals are high-level objectives stating what the system is expected to achieve, for example:
- Enabling timely object detection within the defined operational range.
- Mitigating errors in individual sensor data through redundant sensing and cross-validation against other sensors’ data.
- Keeping accurate estimates of the object’s position, while making sure any errors stay within the limits.
These safety goals are then refined into functional safety requirements, which specify how the sensor fusion algorithm and the hardware must respond in order to fulfill the safety goals.
- Implementation and Architecture Design: The next step is to design and develop the sensor fusion system to meet the safety objectives defined above. Let’s explore some of the architectural decisions.
- Using Redundant Inputs: The system uses multiple sensors (camera, LiDAR, and RADAR) to detect the same object, so that a single failure does not trigger an incorrect response that may prove to be fatal.
- Using Different Sensors: The same object is captured by multiple sensors with different failure modes, which reduces the chance of a common failure affecting them all.
- Modular Separation: The principles of functional safety emphasize isolation between safety-critical and non-critical functions.
Why do we have to do that?
One of the main aspects of functional safety is isolating systems so that the failure of one system does not affect the others. If systems are separated, a non-critical failure has no effect on a safety-critical system.
Core functions (such as collision avoidance and object detection) are separated from non-core functions (such as infotainment and personalization), because core functions directly affect the behavior of the vehicle and play a key role in avoiding accidents.
The point we are trying to establish is that sensor fusion pipelines cannot be shared between core and non-core functions, or they will interfere with each other.
Logical and physical separation is created within the hardware and software layers to contain failures. Physical separation, on separate SoCs or on different cores, helps prevent cascading failures.
With ISO 26262, you assign an ASIL to each system, with ratings ranging from A to D (the highest). Modularization makes it easier to decompose functions and develop them according to different ASILs.
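The decomposition just mentioned is itself constrained: ISO 26262-9 only permits specific splits of a requirement across sufficiently independent elements. Those schemes can be captured as data; the sketch below encodes them (the function name and representation are our own, and independence between the elements still has to be argued separately):

```python
# ASIL decomposition schemes per ISO 26262-9.
# "B(D)" reads: developed to ASIL B, in the context of an original ASIL D goal.
DECOMPOSITION = {
    "D": [("C", "A"), ("B", "B"), ("D", "QM")],
    "C": [("B", "A"), ("C", "QM")],
    "B": [("A", "A"), ("B", "QM")],
    "A": [("A", "QM")],
}

def can_decompose(original, part1, part2):
    """True if a requirement at `original` ASIL may be split across two
    sufficiently independent elements developed at part1/part2."""
    schemes = DECOMPOSITION[original]
    return (part1, part2) in schemes or (part2, part1) in schemes

print(can_decompose("D", "B", "B"))  # True: the classic B(D) + B(D) split
print(can_decompose("D", "A", "A"))  # False: A(D) + A(D) is not permitted
```

This is why the modular separation above matters: without demonstrable independence between the two elements, none of these decompositions is allowed and the full original ASIL applies.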
- Time synchronization: The inputs that are captured need to be time-aligned in a sensor fusion pipeline to enable consistency in the output.
- Fail-silent and Fail-operational: When a fault occurs, a decision has to be made whether the system should continue to operate safely (fail-operational) or shut down (fail-silent).
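Redundancy and the fail-silent/fail-operational decision come together in a degradation policy. The sketch below is an illustrative policy for a hypothetical three-sensor pipeline, not a normative rule; real systems would also weigh which sensor failed and the current driving context:

```python
from enum import Enum

class Mode(Enum):
    NOMINAL = "nominal"
    DEGRADED = "fail-operational"  # keep operating on the remaining sensors
    SILENT = "fail-silent"         # stop output, hand over or perform a safe stop

def fusion_mode(camera_ok, lidar_ok, radar_ok):
    """Illustrative degradation policy: with three redundant, diverse
    sensors the pipeline tolerates one failure, but goes silent once
    cross-validation between at least two sensors is no longer possible."""
    healthy = sum([camera_ok, lidar_ok, radar_ok])
    if healthy == 3:
        return Mode.NOMINAL
    if healthy == 2:
        return Mode.DEGRADED
    return Mode.SILENT

print(fusion_mode(True, True, False))   # one sensor down -> fail-operational
print(fusion_mode(True, False, False))  # two down -> fail-silent
```

The threshold of two healthy sensors is an assumption chosen so that every detection can still be cross-checked against an independent source.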
- Verification & Validation: Now that the sensor fusion system is developed, the next logical step is verifying and validating it. There are several parts to this, as the hardware and software pieces need to operate in unison across different real-world scenarios.
- HIL Testing: Using actual sensors or control units in a simulated environment to validate how the system would respond.
- Simulation Testing: Using synthetic data to simulate various scenarios to test sensor fusion.
- On Road Testing: The testing can also be done on-road but in a controlled environment to verify the functioning of the system.
- Fault Injection: Implementing failure scenarios to see how the system would respond.
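As a small illustration of fault injection, the sketch below injects a "stuck-at" fault into a stream of range readings and checks that a plausibility monitor catches it. The fault model, monitor, and data are all assumptions for the example:

```python
def stuck_at(last_value):
    """Fault model: the sensor keeps reporting one value (a 'stuck-at' fault)."""
    return lambda true_value: last_value

def inject(readings, fault, start):
    """Replace readings from index `start` onward with the faulty output."""
    return [r if i < start else fault(r) for i, r in enumerate(readings)]

def stuck_detected(readings, window=3):
    """Plausibility monitor under test: flag a sensor whose reading stops
    changing over `window` samples while the vehicle is moving."""
    return len(readings) >= window and len(set(readings[-window:])) == 1

clean = [10.0, 10.5, 11.1, 11.6, 12.2, 12.8]     # illustrative range readings (m)
faulty = inject(clean, stuck_at(11.1), start=3)  # sensor freezes at sample 3

print(stuck_detected(clean), stuck_detected(faulty))  # False True
```

The value of this kind of test is evidence: it shows not just that the fusion system works on good data, but that its safety mechanisms actually trip on the failure modes identified in HARA.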
- Achieving Compliance: This is the final and perhaps most important step, where you document every activity and ensure complete traceability.
- Documenting activities such as hazard analyses, root cause analyses, and safety goals to back the implementation.
- Preparing safety manuals and defining guidelines and instructions for integrators, end users, and others.
- Ensuring the processes, development practices, and tools meet the compliance requirements.
Wrapping Up
The automotive industry is moving towards more software-driven and connected architectures, and functional safety according to ISO 26262 is evolving to accommodate these technological shifts. At MosChip, we build HMI and infotainment solutions, object/lane detection models, and more, involving FPGAs, CPUs, and MCUs.
Our team of experts has experience working with Adaptive AUTOSAR, FuSa (ISO 26262), and MISRA C compliance, as well as autonomous driving systems, vehicle diagnostics, telematics, and middleware.
To know more about how we can help you with your automotive functional safety requirements and how we enable integrated product engineering in the AI-led product era, get in touch with us.