
The Human Element in Code: Why AI Can’t (Yet) Replace Car Software Developers

[Ed. note: As we recharge and gear up for the upcoming year, we’re revisiting our top ten posts of 2023. Enjoy this favorite, and we look forward to connecting with you in 2024.]

Amidst the buzz surrounding AI advancements, anxieties are growing about artificial intelligence taking over software development roles. The scenario often painted is one where business executives and product managers bypass software developers entirely, instructing AI directly to build the software they want. However, after 15 years of building software from sometimes vague specifications, I find these concerns somewhat overstated.

While coding itself can present challenges, it is rarely where projects stall. Once you grasp the syntax, logic, and essential techniques, writing and debugging code becomes relatively straightforward – most of the time. The real bottlenecks arise in defining the software’s purpose. The most demanding aspect of software development isn’t writing lines of code; it’s crafting clear, effective requirements – and these requirements remain firmly in the human domain.

This article will delve into the crucial relationship between software requirements and the resulting software, highlighting what AI truly needs to deliver effective outcomes, particularly in complex fields like coding software for cars.

Feature or Bug? The Perils of Unclear Requirements

Early in my career, I joined a project mid-development to accelerate the team’s progress. The software’s core function was to configure customized products on e-commerce platforms.

My task involved generating dynamic terms and conditions. These terms were conditional, varying based on the product type and the customer’s US state due to differing legal stipulations.

During development, I identified a potential flaw. A user could select a product type, which would generate the correct terms and conditions. However, later in the workflow, the system allowed the user to switch to a different product type while retaining the initially generated terms. This contradicted a key feature explicitly outlined and signed off on in the business requirements document.
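To make the flaw concrete, here is a minimal sketch of how that kind of stale-state bug arises. The product types, states, and terms below are entirely hypothetical; the point is that the terms are generated when a product type is first selected, but nothing regenerates them when the user later switches.

```python
# Hypothetical terms, keyed by (product type, US state).
TERMS_BY_PRODUCT_AND_STATE = {
    ("warranty", "CA"): "California warranty terms...",
    ("warranty", "NY"): "New York warranty terms...",
    ("insurance", "CA"): "California insurance terms...",
}

class ProductConfigurator:
    def __init__(self, state: str):
        self.state = state
        self.product_type = None
        self.terms = None

    def select_product(self, product_type: str) -> None:
        # Terms are generated correctly at first selection.
        self.product_type = product_type
        self.terms = TERMS_BY_PRODUCT_AND_STATE[(product_type, self.state)]

    def change_product_later(self, product_type: str) -> None:
        # The flaw: the product changes, but the previously generated
        # terms are retained instead of being regenerated.
        self.product_type = product_type

order = ProductConfigurator(state="CA")
order.select_product("warranty")
order.change_product_later("insurance")
print(order.product_type)  # insurance
print(order.terms)         # still the warranty terms: the eventual defect
```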

I innocently asked the client, “Should the option to override the correct terms and conditions be removed?” The response I received is etched in my memory. With unwavering certainty, the senior executive stated:

“That will never happen.”

This was a seasoned executive, deeply familiar with the company’s operations and specifically chosen to oversee this software project. The ability to override the default terms had been explicitly requested by this same individual. Who was I, a junior developer, to question a senior executive of a paying client? I dismissed my concern and moved on.

Months later, just weeks before the software launch, a client-side tester reported a defect, which was assigned to me. Upon reviewing the defect details, I couldn’t help but laugh.

The very issue I had flagged – the ability to override default terms, deemed impossible by the client – was now occurring. And guess who was tasked with fixing it? And guess who was initially blamed?

The fix itself was simple, and the bug’s impact was minimal. However, this experience became a recurring theme throughout my software development career. Conversations with fellow software engineers confirmed I wasn’t alone. The problems grew larger, more intricate, and costlier, but the root cause often remained the same: ambiguous, inconsistent, or simply incorrect requirements.

Cartoon drawing depicting two developers looking at code with speech bubbles. One says "It's not a bug" and the other replies "It's a feature".

AI’s Current Landscape: From Chess to Self-Driving Challenges

Artificial intelligence, a concept with a long history, has seen recent high-profile advancements that have sparked media attention and even congressional discussions. AI has already demonstrated remarkable success in certain domains. Chess immediately comes to mind as a prime example.

Computer chess programs date back to the 1950s, and it is now widely acknowledged that AI surpasses human chess-playing capabilities. This isn’t surprising given chess’s finite parameters (though the game itself remains unsolved).

Chess always begins with 32 pieces on a 64-square board, adheres to well-defined, universally accepted rules, and has a clear, singular objective: checkmate. Each turn presents a finite number of possible moves. Playing chess is essentially executing a rules engine. AI systems excel at calculating the consequences of each move to select the optimal action for capturing pieces, gaining positional advantage, and ultimately winning.
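Chess itself is far too large to sketch here, but the same exhaustive-calculation idea fits in a few lines for a smaller rules engine. The sketch below uses the toy game of Nim, assuming the variant where players take one to three stones and taking the last stone wins, to show how a complete rules engine plus search yields provably optimal play:

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # the rules engine: take 1, 2, or 3 stones per turn

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # the previous player took the last stone and won
    # A position is winning if some legal move leaves the opponent in a
    # losing position: the same exhaustive calculation a chess engine does.
    return any(not can_win(stones - m) for m in MOVES if m <= stones)

def best_move(stones: int) -> int | None:
    """Pick a move that forces a win, or None if every move loses."""
    for m in MOVES:
        if m <= stones and not can_win(stones - m):
            return m
    return None

print(best_move(10))  # 2: leaves 8 stones, a losing position for the opponent
```

This brute-force certainty is only possible because the rules and the win condition are fully specified in advance, which is precisely the property that driving, and software requirements, lack.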

Another area of significant AI investment is self-driving cars. Manufacturers have been promising autonomous vehicles for years. While some cars now possess self-driving capabilities, they often come with limitations. Many systems require active driver supervision, with drivers needing to keep their hands on the wheel; the self-driving feature isn’t fully autonomous. This is particularly relevant when considering the complexities of coding software for cars that operate in unpredictable real-world environments.

Similar to chess-playing AI, self-driving cars primarily rely on rules-based engines for decision-making. However, unlike chess, the rules for navigating every conceivable driving scenario are not clearly defined. Drivers constantly make countless split-second judgments – avoiding pedestrians, maneuvering around parked cars, navigating busy intersections. The accuracy of these judgments is the difference between a safe arrival and a trip to the emergency room.

In technology, the gold standard is often “five or even six 9s” of availability – meaning a website or service is operational 99.999% (or 99.9999%) of the time. Achieving the initial 99% availability is relatively less demanding. It allows for over three days – 87.6 hours – of downtime annually. However, each subsequent “9” added exponentially increases the cost and complexity. Reaching 99.9999% availability reduces permissible downtime to a mere 31.5 seconds per year, requiring significantly more rigorous planning, effort, and expense. While achieving 99% may not be trivial, it’s proportionally much easier and cheaper than attaining that final fraction of perfection.

Availability   Downtime per year
99%            87.6 hours
99.9%          8.76 hours
99.99%         52.6 minutes
99.999%        5.26 minutes
99.9999%       31.5 seconds
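The arithmetic behind the table is simple enough to verify in a few lines of Python: each extra “9” cuts the allowed downtime by a factor of ten.

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for nines in range(2, 7):
    unavailability = 10 ** -nines            # 0.01, 0.001, ...
    availability = 1 - unavailability        # 0.99, 0.999, ...
    downtime_hours = HOURS_PER_YEAR * unavailability
    print(f"{availability:.4%} available -> {downtime_hours:.4f} hours "
          f"({downtime_hours * 3600:.1f} seconds) of downtime per year")
```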

No matter how advanced AI becomes in self-driving, the inherent risk of accidents and fatalities remains. These risks and their consequences are, unfortunately, a daily reality with human drivers. While the acceptable accident and fatality rate for autonomous vehicles is yet to be determined by governments, it must logically be at least as good as, if not better than, human driving.

The primary reason achieving this level of safety is so challenging is the vastly greater number of variables in driving compared to chess – and crucially, these variables are not finite. The first 95% or even 99% of driving scenarios might be predictable and manageable. But the edge cases beyond that 99% are endless, and while they may share some commonalities, each is ultimately unique: other vehicles driven by unpredictable humans, road closures, construction, accidents, weather events, even freshly paved roads lacking lane markings. It’s exponentially harder to train an AI model to recognize and respond appropriately to these anomalies, because cases that are similar but never identical complicate the task of identifying the correct response. This complexity highlights the immense challenge in coding software for cars that can handle the unpredictable nature of driving.

AI Can Generate Code, But Software Requires Human Insight

Creating and maintaining software, especially complex systems like those in modern vehicles, shares more similarities with driving than with chess. Software development involves far more variables, and the “rules” often rely on human judgment and interpretation. While there’s a desired outcome when developing software, it’s rarely as singular and clearly defined as winning a chess game. Software is rarely “finished”; features are added, bugs are fixed, and it’s an ongoing, evolving process. Unlike chess, a software project doesn’t simply end after a win or loss.

In software development, we utilize technical specifications to bring our software designs closer to the tightly controlled rules engine of chess. Ideally, specifications detail expected user behaviors and program flows – step-by-step instructions like “when a user buys an e-sandwich: click this button, create this data structure, run this service.” However, such comprehensive specs are rare. More often, we receive feature wishlists, napkin sketches of wireframes, and ambiguous requirements documents, and are then asked to “make our best judgment.” This is particularly true in innovative fields like coding software for cars, where requirements can be fluid and evolving with technological advancements.
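When a spec really is that precise, the translation into code is almost mechanical. Here is a minimal sketch of that hypothetical e-sandwich flow; every name below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class ESandwichOrder:  # "create this data structure"
    customer_id: str
    bread: str
    fillings: list[str]

def fulfillment_service(order: ESandwichOrder) -> str:  # "run this service"
    return (f"Order for {order.customer_id}: "
            f"{order.bread} with {', '.join(order.fillings)}")

def on_buy_button_clicked(customer_id: str) -> str:  # "click this button"
    order = ESandwichOrder(customer_id, bread="rye",
                           fillings=["e-ham", "e-cheese"])
    return fulfillment_service(order)

print(on_buy_button_clicked("customer-42"))
```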

Even worse, requirements often change or are completely disregarded. Recently, I was asked to assist a team in developing a system to disseminate health information related to COVID-19 in regions with unreliable Wi-Fi access. The proposed solution was an SMS-based survey application – using text messages for data collection. Initially, I was enthusiastic about contributing.

However, as the team described their vision, concerns arose. Asking a retail customer to rate their shopping experience on a scale of 1-10 via SMS is straightforward. Conducting multi-step surveys with multiple-choice questions about COVID-19 symptoms via SMS is significantly more complex. While I didn’t refuse the project, I raised numerous potential points of failure and urged the team to clearly define how the system would handle incoming responses for every question. Would answers be comma-separated numbers corresponding to answer options? What would happen if a submitted answer didn’t match any of the provided options?
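These questions sound pedantic until you try to write the parser. A minimal sketch, assuming the comma-separated-numbers format floated above, shows how quickly the unanswered requirements surface in code:

```python
def parse_sms_answers(body: str, option_counts: list[int]) -> list[int]:
    """Parse a reply like '2, 1, 3' into one option number per question."""
    parts = [p.strip() for p in body.split(",")]
    if len(parts) != len(option_counts):
        raise ValueError(f"expected {len(option_counts)} answers, "
                         f"got {len(parts)}")
    answers = []
    for i, (part, n_options) in enumerate(zip(parts, option_counts), start=1):
        if not part.isdigit() or not 1 <= int(part) <= n_options:
            # This is the unanswered requirements question: re-prompt the
            # user? Discard the response? Record the answer as missing?
            raise ValueError(f"question {i}: {part!r} is not "
                             f"an option 1-{n_options}")
        answers.append(int(part))
    return answers

print(parse_sms_answers("2, 1, 3", [4, 2, 5]))  # [2, 1, 3]
# parse_sms_answers("2, yes, 3", [4, 2, 5])     # ValueError: question 2 ...
```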

After considering these questions, the team reached a consensus: abandoning the project was the most prudent course of action. Believe it or not, this was, in my view, a successful outcome. Proceeding without clear solutions for handling potential errors in user-submitted data would have been far more wasteful.

Is the vision of AI-driven software development simply allowing stakeholders to directly instruct a computer to create such an SMS survey system? Will AI proactively ask probing questions about handling data validation and error scenarios when collecting survey data via SMS? Will it account for the inevitable human errors in the process and devise strategies to manage them? These are critical questions, especially when considering the intricacies of coding software for cars, where safety and reliability are paramount.

To create functional software using AI, you must possess a clear vision of your desired outcome and articulate it with precision. Even when developing software for personal use, unexpected challenges often only become apparent once coding begins.

Over the past two decades, the software industry has largely transitioned from the waterfall methodology to agile development. Waterfall mandates complete requirement definition before any coding starts, while agile embraces flexibility and iterative adjustments throughout the development process.

Countless waterfall projects have failed because stakeholders, despite believing they fully understood and documented their requirements, were ultimately disappointed with the delivered product. Agile development emerged as a response to these shortcomings.

AI may prove most effective in rewriting existing software for newer hardware or more modern programming languages. Many organizations still rely on software written in COBOL, a 60-year-old language, while the pool of COBOL programmers is shrinking. If requirements are perfectly defined, AI might indeed produce software faster and cheaper than human teams. I believe AI could replicate existing software more efficiently than human programmers, but this is because the crucial groundwork of defining what that software should do has already been done by humans.

AI might excel in a waterfall-style development process – often ironically referred to as a “death march.” But who truly struggles with waterfall? We do, humans. And the bottleneck isn’t the coding phase after requirements are finalized. It’s everything before that – the requirement definition stage itself. Artificial intelligence possesses remarkable capabilities, but it cannot read minds or inherently understand what users truly need or want. In the complex domain of coding software for cars, and software development in general, the human element of understanding, interpreting, and refining requirements remains indispensable.
