Israel MoD to test a Combat Robotic Autonomous Vehicle

The Israel Ministry of Defense will begin testing a robotic unmanned vehicle (photo above) called M-RCV (Medium Robotic Combat Vehicle), developed by the Ministry’s Directorate of Defense Research and Development (DDR&D), the Tank and APC Directorate, and Israeli security industries. The robotic combat vehicle is being presented this week at Elbit Systems’ pavilion at the Eurosatory Defense and Security Exhibition in Paris.

The vehicle includes a new BLR-2-type robotic platform made by BL, a 30 mm autonomous turret developed by the Tank and APC Directorate for the “Eitan” APC, Elbit’s “Iron Fist” Active Protection System, fire control and mission management systems, a robotic autonomy kit, and situational awareness systems. The vehicle also features a capsuled drone for forward reconnaissance missions, and a passive sensing kit developed by Elbit Systems and Foresight.

The technological demonstrator has the ability to carry heavy and varied mission loads, and a built-in system for transporting and receiving UAVs. The vehicle will also incorporate sights, an IAI missile launcher, and Rafael Advanced Defense Systems’ “Spike” missiles. The M-RCV’s capabilities include a highly autonomous solution for forward reconnaissance, and controlled lethality in all-terrain conditions during the day and night in all-weather scenarios.

The system was developed as part of the autonomous battlefield concept led by IMoD, and it is expected to start field tests during 2023 in representative scenarios.

Needed: a Radar That Can Distinguish Between Humans and Animals

The Directorate of Defense Research & Development in the Israeli Ministry of Defense (“Mafat”) announced a new competition, the MAFAT Radar Challenge, for the development of a capability to accurately distinguish between humans and animals in radar tracks. The winner will receive $40,000. The competition’s objective is to explore automated, novel solutions that will enable classification of humans and animals with a high degree of confidence and accuracy.

While some object types are easily distinguishable from one another by traditional signal processing techniques, distinguishing between humans and animals, which are non-rigid objects, tracked in pulse-Doppler radars, is a difficult task. Today, the task of classifying radar-tracked, non-rigid objects is mostly done by human operators and requires the integration of radar and optical systems.

Why is this a difficult task?

Classification of radar-tracked objects is traditionally done using well-studied radar signal features. For example, the Doppler effect (Doppler shift) and the radar cross-section (RCS) of an object can be utilized for the classification task. However, from the radar system’s perspective, looking at the tracked objects through the lens of those traditional features, humans and animals appear very similar.

Microwave signals travel at the speed of light but still obey the Doppler effect. Microwave radars receive a Doppler frequency-shifted reflection from a moving object: the frequency is shifted higher for approaching objects and lower for receding ones. The Doppler effect is a strong feature for some classification tasks, such as separating moving vehicles from animals. However, humans and animals typically move within the same range of velocities.
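The point can be made concrete with the standard monostatic Doppler formula, f_d = 2·v·f_c / c. The sketch below uses a hypothetical 10 GHz carrier (not a value from the article) to show that a human and an animal moving at the same radial speed produce the same bulk Doppler shift:

```python
# Sketch of the textbook monostatic Doppler shift, f_d = 2 * v * f_c / c.
# The 10 GHz carrier frequency is a hypothetical X-band value, not taken
# from the article or the MAFAT challenge.
C = 3.0e8          # speed of light, m/s
F_CARRIER = 10e9   # hypothetical carrier frequency, Hz

def doppler_shift(v_radial_mps: float) -> float:
    """Doppler shift in Hz for a given radial velocity
    (positive = approaching, negative = receding)."""
    return 2.0 * v_radial_mps * F_CARRIER / C

# A walking human and a trotting animal at the same ~1.5 m/s radial speed
# yield identical bulk shifts, so bulk Doppler alone cannot separate them.
print(doppler_shift(1.5))   # 100.0 Hz
print(doppler_shift(-1.5))  # receding target: negative shift
```

The micro-Doppler modulation around that bulk shift (limb and gait motion) is where the discriminative information actually lives, which is why the challenge is hard with bulk features alone.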

The radar cross-section (RCS) is a measure of how detectable an object is by radar. An object reflects a limited amount of radar energy back to the source, and this reflected energy is used to calculate the RCS of an object. A larger RCS indicates that an object is more easily detected by radars. Multiple factors contribute to the RCS of an object, including its size, material, shape, orientation, and more.

The RCS is a classic feature for classifying tracked objects. However, it turns out that the RCS of humans is similar to the RCS of many animals; thus, RCS alone is not a good enough separating feature either. The task of automatically distinguishing between humans and animals based on their radar signature is, therefore, a challenging one.
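The role RCS plays in detectability can be sketched with the textbook monostatic radar range equation, P_r = P_t·G²·λ²·σ / ((4π)³·R⁴). All parameter values below are hypothetical; the point is only that two targets with similar σ at the same range return echoes of similar strength:

```python
import math

# Sketch of the textbook monostatic radar range equation.
# All numeric values here are hypothetical illustrations, not
# parameters of any real system mentioned in the article.
def received_power(p_t: float, gain: float, wavelength: float,
                   rcs: float, r: float) -> float:
    """P_r = P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4), in watts."""
    return p_t * gain**2 * wavelength**2 * rcs / ((4 * math.pi) ** 3 * r**4)

# Assume (hypothetically) a human and a large animal both present ~1 m^2
# of RCS at a 1 km range: the echo powers are indistinguishable.
human_echo = received_power(p_t=1e3, gain=1e3, wavelength=0.03, rcs=1.0, r=1e3)
animal_echo = received_power(p_t=1e3, gain=1e3, wavelength=0.03, rcs=1.0, r=1e3)
print(human_echo == animal_echo)  # True: equal RCS at equal range, equal echo
```

Since echo amplitude scales linearly with σ, overlapping RCS distributions translate directly into overlapping amplitude distributions, which is why RCS fails as a standalone discriminator.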

The objective of the competition is to explore whether creative approaches and techniques, including deep convolutional neural networks, recurrent neural networks, transformers, classical machine learning, classical signal processing, and more, can provide better solutions for this difficult task.

Mafat is interested in approaches inspired by non-radar fields, including computer vision, audio analysis, sequential data analysis, and so on. It provides real-world data (I/Q matrices) gathered from diverse geographical locations, different times, sensors, and qualities (high and low signal-to-noise ratio, SNR). The competitor’s mission is to classify whether the tracked object in each segment is an animal or a human.
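A common first step with such I/Q data is to convert each matrix into a Doppler spectrogram that can feed a vision-style classifier. The sketch below assumes a hypothetical layout of rows = slow-time pulses and columns = fast-time samples; the actual MAFAT data format may differ:

```python
import numpy as np

# Sketch: turning an I/Q matrix into a micro-Doppler-style feature map.
# Assumed (hypothetical) layout: axis 0 = slow-time pulses, axis 1 =
# fast-time samples. The real challenge data format may differ.
def iq_to_doppler_map(iq: np.ndarray) -> np.ndarray:
    """FFT along slow time gives the Doppler spectrum per fast-time bin;
    the log-magnitude map is a typical input for an image classifier."""
    spectrum = np.fft.fftshift(np.fft.fft(iq, axis=0), axes=0)
    return 20.0 * np.log10(np.abs(spectrum) + 1e-12)  # dB scale, no log(0)

# Synthetic example: 32 pulses x 128 complex samples of noise.
rng = np.random.default_rng(0)
iq = rng.standard_normal((32, 128)) + 1j * rng.standard_normal((32, 128))
feature = iq_to_doppler_map(iq)
print(feature.shape)  # (32, 128): Doppler bins x fast-time bins
```

From there, treating the feature map as a single-channel image lets competitors reuse standard computer-vision architectures, which is exactly the kind of cross-field transfer the challenge invites.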