Will AI replace the programmer? Not so fast

The field of artificial intelligence (AI) is revolutionizing automation, and large language models (LLMs) like ChatGPT are at the forefront. These models showcase impressive capabilities in generating textual and graphic content and engaging in interactive dialogues with humans. The application of AI in areas such as sales, customer service, and teaching is expanding, taking on tasks we never thought computers could perform.

Code, being a language itself, is abundant on the web, presenting a vast pool of data on which neural networks can be trained to code on demand. This automation first appeared with GitHub Copilot and is now further exemplified by GPT-4, which can generate code for a wide range of tasks.

In our technology-driven world, programming has become one of the most sought-after and esteemed professions. However, an ironic question arises: Can AI replace programmers?

Nir Dobovizki, a senior software architect at CodeValue, has embraced tools like ChatGPT as part of his programming routine. While these tools are impressive and make a programmer’s life more convenient, they do not pose a threat to their jobs, according to Nir.

Nir emphasizes the need to break down the concept of “artificial intelligence.” He views it as a marketing term, clarifying that there is no manifestation of true intelligence, understanding, or knowledge in these tools. They are statistical instruments, albeit remarkable ones. AI lacks the genuine ability to differentiate between truth and falsehood, or to distinguish facts on its own.

“For me, one of the primary purposes of code is effective communication to fellow programmers, ensuring they understand my intentions beyond mere functionality. Code is akin to a story, and clarity matters. These tools, although capable of performing tasks, often lack style, hindering collaborative coding efforts.”

So how does one effectively utilize these tools?

“It’s undoubtedly an automated tool that offers significant assistance—an advancement in automation. However, I only use it for tasks I’m familiar with. If it’s something I’m unsure about, I won’t be able to verify whether the AI has executed it correctly. I rely on these tools for problems I comprehend, saving me considerable typing and providing confidence in the accuracy of the written code.”

The term “artificial intelligence” itself poses challenges, obscuring the creation and operation of these models. Nir Dobovizki highlights that each model and neural network is a product of the data used to train and validate it. Training these models necessitates abundant and high-quality data, with the exact quantity being uncertain in advance. It’s a process of trial and error, with verification playing a vital role. However, biases can emerge at both the training and verification stages, leading to gender and ethnic biases in certain applications like facial recognition systems that excel at identifying white individuals but fail with Black individuals. Bias stems from the data used for training and validation.
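One concrete way the verification problem Nir describes shows up in practice: a model can look accurate overall while performing very differently across demographic groups, which is exactly why humans must inspect results per group rather than trust a single aggregate score. A minimal, illustrative sketch with synthetic data (the groups, records, and numbers here are invented for illustration, not taken from any real system):

```python
# Toy illustration of a per-group accuracy audit: overall accuracy
# looks acceptable while one group fares much worse than another.
from collections import defaultdict

def per_group_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its accuracy."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += (predicted == actual)  # bool counts as 0/1
    return {g: hits[g] / totals[g] for g in totals}

# Synthetic predictions: group A is always right, group B only half the time.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 0, 1), ("B", 1, 1),
]
print(per_group_accuracy(records))  # {'A': 1.0, 'B': 0.5}
```

The overall accuracy here is 75%, which could pass a naive verification step; only the per-group breakdown exposes the disparity.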

Nir Dobovizki

“Perfect verification is essentially impossible; biases may persist due to how we test the model. Therefore, involving humans in the training and operational processes is crucial for vigilance and error correction. We must exercise great caution, particularly when dealing with systems that impact human life, such as autonomous driving systems. Any bias within such systems could jeopardize human safety.”

At present, these tools are incapable of coding complex tasks requiring a deep understanding of multifaceted interactions. These challenges remain exclusively within the realm of skilled and experienced programmers. Even when these tools generate convincing code for intricate problems and perform well in a development environment, there is no way to ensure they are free from critical bugs. These tools empower programmers and may reduce the required workforce, but they are far from replacing programmers, both in their capabilities and in the foreseeable future.

In conclusion, AI-driven tools have ushered in a new era of automation in programming, amplifying the capabilities of programmers while preserving their indispensable role in tackling complex challenges. The collaborative efforts of humans and AI continue to shape the future of programming, offering promising prospects for enhanced productivity and innovation.

[Pictured above: OpenAI HQ. Source: Wiki]

“The payload is the system. Drones become sub-systems”

By: Roni Lifshitz

Thirdeye Systems received a grant of NIS 900,000 from the Israel Innovation Authority and the Ministry of Defense for the development of a non-GPS navigation solution, bringing the total project budget to NIS 1.8 million. Non-GPS navigation eliminates dependence on satellite communications and is immune to GPS jamming. This is a new market for a company that was known mainly for its smart electro-optical algorithms.

Lior Segal, CEO and co-founder of the Netanya-based company (in the center of Israel), told Techtime the market is rapidly changing: “In the past, customers considered the drone a ‘main system’. Today we see a different approach: drones are becoming a sub-system within the complete solution. The center of interest has shifted towards the payload. This dramatic change helped us win a NIS 9 million Ministry of Defense project, in which we take the place of a main contractor.”

Locating people inside buildings

Thirdeye was founded 12 years ago by Lior Segal, CTO Yoel Motola and COO Gil Barak. The idea for the company grew out of the military service of Lior and Yoel, who took an active role in urban warfare events in 2009 as combat infantry officers. Segal: “We started talking about a problem we faced as warriors: how to detect a person inside a room without using a screen. This is how the idea of a ‘third eye’ was born: a thermal camera that identifies human beings and provides a silent warning to the warrior. We needed to find a way to integrate this capability into a compact, flashlight-like kit mounted on the personal weapon.”

With the help of the Ministry of Defense, Thirdeye developed its unique algorithms and IR sensors for the warrior’s personal weapon, which were delivered to IDF special units that immediately put them into operational use. In 2015 it won a NIS 2 million order from the MoD for these systems, called Cerberus. However, at that time drones began to play a vital role in the civil and military markets, and Thirdeye decided to adapt its technology for use with drones.

Thirdeye's Chimera payload for drones. Credit: Techtime

The first product for the new market was Chimera: an electro-optical system that includes a thermal camera, a daylight camera, and people-identification algorithms, enabling identification across wide areas. The system had its baptism of fire during Operation Guardian of the Walls in mid-2021. Segal: “Our main market today is the local market. We collaborate with companies such as Elta Systems, Elbit, Aeronautics and the Israel Aerospace Industries.”

“The company has fifty employees, with all development, manufacturing and assembly done in Israel. Even the AI systems’ database was developed here by us, without using external databases or sub-contractors. Our systems are platform-agnostic and can be installed on any drone, civil or military alike.”

In April 2021 you went public on TASE. Why?

Segal: “This was our way of bringing funding to the company without being considered a business partner of any customer, which could deter other customers.”

What are your main current projects?

“Several products are currently in transition from R&D to serial production. We are developing the Chimera-X to cover a wider area; it is expected to reach maturity early next year. We are also developing a drone-detection platform called Medusa and a new system for ground platforms. Unmanned vehicles will be able to use our systems for various missions, such as people tracking and people avoidance to prevent unwanted damage, especially in difficult terrain conditions.”

The core of your market is the military. What are the main trends in this market?

“We believe that Western societies refuse to pay a heavy price in human lives, and therefore the need for autonomous instruments is growing. Western armies need many robotic tools in order to save human lives. The war in Ukraine illustrates how modern warfare is turning into multidimensional warfare: the warrior must be aware of everything happening around and above him.”

Translated by P. Ofer

 

NXP and Hailo Expand AI cooperation through MicroSys

NXP Semiconductors and Hailo announced a cooperation to provide joint AI solutions for automotive Electronic Control Units (ECUs). The new solutions combine NXP’s S32G and Layerscape automotive processors with the Hailo-8 processor. Hailo-8 is an AI processor for edge computing delivering up to 26 tera-operations per second (TOPS) at a typical power consumption of 2.5 W. The solutions offer an open software ecosystem for applications and software stacks.

The first solution, powered by the Arm-based NXP S32G processor combined with up to two Hailo-8 AI processors, delivers up to 52 TOPS. The second solution, powered by the Arm-based NXP Layerscape platform combined with up to six Hailo-8 processors, delivers up to 156 TOPS. “We are excited to partner with a major player like NXP to demonstrate the true potential of AI for automotive,” said Orr Danon, CEO of Hailo.
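The aggregate throughput figures follow directly from the per-chip rating: 26 TOPS per Hailo-8 scales linearly with the number of accelerators in the solution. A minimal sketch of the arithmetic (the function name is ours, not from either vendor):

```python
# Aggregate AI throughput scales linearly with the number of Hailo-8
# chips: 26 TOPS each, per the announcement.
HAILO8_TOPS = 26            # tera-operations per second per chip
HAILO8_TYP_POWER_W = 2.5    # typical power consumption per chip, watts

def aggregate_tops(num_chips: int) -> int:
    """Total TOPS for a solution carrying num_chips Hailo-8 processors."""
    return num_chips * HAILO8_TOPS

print(aggregate_tops(2))  # S32G solution, two chips: 52 TOPS
print(aggregate_tops(6))  # Layerscape solution, six chips: 156 TOPS
```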

“We look forward to continuing to work with NXP to expand our edge processing solutions to a broad range of demanding applications including industrial & heavy machinery, robotics, and more.” The NXP-Hailo joint solutions are already being utilized by customers, including MOTER Technologies, which is using the Arm-based NXP S32G processor combined with a Hailo-8 M.2 AI accelerator module for Usage-Based Insurance (UBI) applications.

The evaluation boards were designed and produced by the Germany-based MicroSys, which cooperates with both NXP and Hailo. The miriac® AIP-S32G274A and miriac® AIP-LX2160A NXP-Hailo automotive application-ready platforms are available from MicroSys, as are NXP’s own development platforms: BlueBox 3.0 (Layerscape LX2160A and S32G) and GoldBox (S32G). Both are compatible with Hailo-8 M.2 AI Acceleration Modules.

Vanti Brings Analytics to Electronics Manufacturing

Above: CEO Smadar David (right) and CTO Nir Osiroff

AI technologies can have a deep impact on the yield and effectiveness of electronics production lines, and Tel Aviv-based Vanti Analytics is doing just that: it provides a SaaS-based solution already tested by advanced customers. Today the company announced a $4.5 million seed funding round led by True Ventures and More VC, with participation from i3 Equity Partners and the private investor Ariel Maislos. Vanti has raised $6 million in total since it was established in 2019.

The company is developing a cloud-based platform that helps manufacturing operations teams increase yields and throughput for electronic products. Its SaaS platform autonomously leverages machine learning to dramatically reduce ramp-up time, errors and test time for electronics manufacturers. The company was established by CEO Smadar David and CTO Nir Osiroff, both veterans of the automotive LiDAR sensor provider Innoviz Technologies and of technological units in the IDF.

Prior to founding Vanti, David served as MEMS & Mechanics Group Manager at Innoviz and Osiroff served as head of the InnovizPro product line. “As a manufacturer in a very competitive environment, we’re always looking to speed up ramp-up time and serve our customers with the highest quality, volume and price,” said Omer Keilaf, CEO and co-founder of Innoviz. “That’s exactly where Vanti’s platform comes into the picture. We liked how fast it was integrated and demonstrated value leveraging our operations data.”

Suffolk County, NY, and Dynamic Infrastructure to maintain dozens of bridges through AI

The New York, Berlin and Tel Aviv based startup Dynamic Infrastructure is expanding its pilot project with the Public Works Department of New York’s Suffolk County. The company’s deep-learning solution allows bridge and tunnel owners and operators to obtain a visual diagnosis of the assets they manage in order to reduce direct and indirect maintenance costs. After the successful completion of a pilot involving one bridge, the parties agreed to expand the use of the AI-based technology to 74 bridges in the county, located on the eastern end of Long Island. Deployment of the technology is currently in progress.

Dynamic Infrastructure is currently conducting projects in other states in the U.S. as well as in Germany, Switzerland, Greece, and Israel with private and public transportation bodies. The company’s clients operate a total of 30,000 assets, ranging from national, state, regional and municipal departments of transportation to Public-Private Partnerships (PPPs) and private companies.

“The latest project expansion aims to use our technology to cover the entire inventory by Q2 2021,” said Saar Dickman, co-founder and CEO of Dynamic Infrastructure. He added that Suffolk County is typical of the situation in the US at large, where data from the Federal Highway Administration indicates that approximately 30% of all bridges in the US are in fair or poor condition.

AI-based bridge maintenance

According to the American Society of Civil Engineers, which evaluates and publishes a report card on the U.S. infrastructure every four years, the country’s infrastructure was given a D+ grade and more than 56,000 bridges were classified as being “structurally deficient”.

The aim of the Suffolk County deployment is to enable its Public Works Department to better coordinate and make the right decisions by prioritizing maintenance of its infrastructure assets. “The system allows any operator, inspector or maintenance engineer to have actionable intelligence at their fingertips in order to decide if, when and how daily maintenance and maintenance projects should be conducted, by supplying instant alerts about anomalies,” said Kevin Reigrut, member of Dynamic Infrastructure’s board of advisors and former executive director of the Maryland Transportation Authority.

The novel technology translates into huge annual savings for owners, operation and maintenance engineers, and contractors. Dynamic Infrastructure’s AI-based decision-making SaaS product continuously processes past and current inspection reports and visuals, identifying future maintenance risks and evolving defects. The proprietary technology provides live, cloud-based risk analysis of any bridge or tunnel and automatically alerts when changes are detected in maintenance and operating conditions, before they develop into large-scale failures.

The platform creates a “visual medical record” for each asset, based on images taken from past and current inspection reports and interim inspections. The visual analysis can be applied to imagery from any source, be it smartphones, drones, or laser scanning. The images are compared and serve as the basis for alerts on changes in maintenance conditions. They can be easily accessed through a simple browser and instantly shared with peers and contractors to speed maintenance workflows and improve budget expenditure.

Dynamic Infrastructure harnesses the power of AI to disrupt Operation & Maintenance of critical transportation assets. Founded by industry professionals with decades of operation and maintenance experience for PPPs and DOTs, Dynamic Infrastructure has become an industry leader and key driver of a data revolution in decision-making processes related to bridge and tunnel Operations & Maintenance. Headquartered in New York, NY, with offices in Germany and Israel, Dynamic Infrastructure maintains a close relationship with its clients.

AI-based Visual Assistance Tool for Technicians

TechSee from Tel Aviv announced the completion of a $30 million equity investment round co-led by OurCrowd, Salesforce Ventures, and TELUS Ventures. Founded in 2015, the company has raised $54 million in funding to date. TechSee has developed a computer vision AI and augmented reality solution to assist customers and technicians in the unboxing and installation of electrical and electronics products.

TechSee’s AI platform can automatically identify components, ports, cables, LED indicators, and more to detect issues and suggest resolutions to customers, contact center agents, and field technicians. Via a simple tap of a screen, customers use their smartphone camera to show the virtual technician exactly what they see in their physical environment.

Using deep learning, the software (a virtual technician) identifies the product and visually guides the customer through the unboxing process using a suite of augmented reality tools: augmenting guidance for specific components, tracking consumer and device movements to allow interactive guidance, and providing step-by-step instructions through the installation and testing process to verify the device works properly.

“Our vision is to get rid of the User Manual”

“There has been a significant increase in demand for contactless customer service technologies propelled by COVID-19 and the acceleration of digital transformation projects,” said Eitan Cohen, CEO of TechSee. “Our Visual Automation technology is at the heart of it. Our vision is to get rid of the user manual and replace it with dynamic AR assistants.”

TechSee recently announced a commercial partnership with Verizon to address this issue by bringing visual assistance to customers. It also established commercial partnerships with Vodafone, Orange, Liberty Global, Accenture, Hitachi, and Lavazza. The need to enhance the user’s product unboxing experience has brought many brands to showcase their product unboxing process using video.

In fact, YouTube reports an increase in product unboxing video views of 57% in one year, and an increase in uploads of more than 50%. These videos have more than a billion views annually. Google Consumer Survey underscores these statistics, with 20% of consumers (1 in 5) reporting that they’ve watched an unboxing video.

Hailo Challenges Google and Intel

AI chipmaker Hailo announced the launch of its M.2 and Mini PCIe high-performance AI acceleration modules for empowering edge devices. Integrating the Hailo-8 processor, the modules can be plugged into a variety of edge devices, enabling high-performance deep learning applications on them. Hailo’s AI acceleration modules seamlessly integrate with standard frameworks such as TensorFlow and ONNX, both supported by its Dataflow Compiler.

Hailo announced that a comparison of average frames per second (FPS) across multiple standard neural network benchmarks shows that its AI modules achieve an FPS rate 26x higher than Intel’s Myriad-X modules and 13x higher than Google’s Edge TPU modules. The Hailo-8 M.2 module (photo above) is already integrated into the next generation of Foxconn’s BOXiedge, with no PCB redesign required.
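Headline figures like “26x higher” are ratios of average FPS measured across a benchmark suite. A minimal sketch of how such a ratio is computed; the per-benchmark FPS numbers below are hypothetical, chosen only to illustrate the calculation, and are not Hailo’s published results:

```python
# Speedup as a ratio of mean FPS across a benchmark suite.
def average_fps_speedup(ours, theirs):
    """How many times faster 'ours' is than 'theirs' on average."""
    return (sum(ours) / len(ours)) / (sum(theirs) / len(theirs))

hailo_fps = [100.0, 200.0, 300.0]  # hypothetical per-benchmark FPS
rival_fps = [10.0, 20.0, 30.0]     # hypothetical competitor FPS

print(average_fps_speedup(hailo_fps, rival_fps))  # 10.0
```

Note that a ratio of arithmetic means can be dominated by the benchmarks with the highest absolute FPS, which is why benchmark suites sometimes report geometric means instead.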

“Manufacturers across industries understand how crucial it is to integrate AI capabilities into their edge devices,” said Orr Danon, CEO of Hailo. “Simply put, solutions without AI can no longer compete.” The Hailo-8 AI modules are already being integrated by select customers worldwide. More information on the Hailo-8 M.2 and Mini PCIe AI modules can be found here.

Hailo-8 vs. Intel Myriad-X and Google Edge TPU: performance across common neural network benchmarks