The software industry has long been plagued by the push for cost-cutting and efficiency gains. Stop blindly advocating for developers to use AI to write code.

baoshi.rao wrote:

    The software industry has long been plagued by the push for cost-cutting and efficiency gains. Prolonged development cycles, seemingly endless release timelines, and a steady stream of defects hardly match the caliber of the elite teams building the software. Generative AI appears to offer a glimmer of hope, and its refreshing performance leads many to think: generative AI can produce code automatically and at low cost, the process is repeatable, and the output is as disposable as a cloud resource; if a piece of code isn't suitable, just discard it and generate a new one. Does this mean we no longer need such an elite team?

    When we ask it questions, generative AI sometimes provides seemingly plausible answers. A quick fact-check, however, often reveals them to be superficial: either entirely fabricated or simply nonsensical, which hardly lives up to the reputation of artificial intelligence. This is the so-called hallucination of generative AI: lacking reliable training data, it arbitrarily pieces together a false response.

    Large-model technology continues to evolve, and hallucinations have noticeably decreased. When these models are deployed in specific domains and use cases, however, they still occur. In this article, I will look at how generative AI is applied in software development and at three key illusions it gives rise to.

    Various software tool vendors are iterating on their code-assistant products, with GitHub Copilot the most prominent example. GitHub claims it can improve programmer productivity by over 55%, and the polished demo videos do look remarkably fast.

    (Image source: https://github.blog/2022-09-07-research-quantifying-github-copilots-impact-on-developer-productivity-and-happiness/)

    But does this mean software delivery can be accelerated by 50%?

    The demo code is open to question, and feedback from programmers who have adopted Copilot in real projects suggests that the speed-ups are mostly limited to common functionality: sorting arrays, initializing data structures, or very simple template code, as in the sketch below.
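
    As a rough illustration (the class and helper below are hypothetical, not taken from any Copilot session or benchmark), this is the kind of routine code an assistant tends to complete quickly and correctly:

        from dataclasses import dataclass, field

        @dataclass
        class Order:
            # Routine data-structure initialization: boilerplate an assistant
            # can usually fill in from the field names alone.
            order_id: str
            items: list[str] = field(default_factory=list)
            total_cents: int = 0

        def sort_orders_by_total(orders: list[Order]) -> list[Order]:
            # A one-line sort helper: trivially generated, trivially verified.
            return sorted(orders, key=lambda o: o.total_cents, reverse=True)

    None of this is hard to write by hand; the assistant mainly saves keystrokes on code whose correctness is obvious at a glance.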

    Handling repetitive utility code with AI is one thing. But in a piece of software under active development, how much of that code actually needs to be written again and again? That is worth questioning, not least because most of it only needs to be written once and then encapsulated for reuse. As for the much larger body of business code, how fast will programmers really proceed? You can have AI generate plenty of business code, but whether that code is safe is probably the bigger issue.

    There are two issues worth paying attention to.

    The first is how programmers select from the code AI provides. Since an assistant can easily offer multiple implementations of a method, programmers inevitably have to sift through them to identify the best option.

    Is this one better? Or that one? Hmm, there are actually five different implementations. First, I need to understand how each piece of code works, then move on to the next one. This implementation is quite elegant, but unfortunately, the unit test failed. Let's try the next one.
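
    A disciplined way to keep this triage from eating the day is to run every candidate through the same small test before studying any of them closely. A minimal sketch, assuming two hypothetical AI-suggested versions of a slugify helper (neither is taken from a real assistant session):

        import re

        def slugify_v1(title: str) -> str:
            # First suggestion: short and elegant, but naive about symbols.
            return title.lower().replace(" ", "-")

        def slugify_v2(title: str) -> str:
            # Second suggestion: collapses any non-alphanumeric run into one dash.
            return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

        CASES = {
            "Hello World": "hello-world",
            "  Spaces & Symbols!  ": "spaces-symbols",
        }

        def passes(candidate) -> bool:
            # The same assertions applied to every suggestion: an elegant
            # candidate that fails here is discarded without further study.
            return all(candidate(src) == want for src, want in CASES.items())

        for fn in (slugify_v1, slugify_v2):
            print(fn.__name__, "passes" if passes(fn) else "fails")

    Running the checks first narrows five suggestions down to the one or two worth actually reading, which is where the time goes.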

    Code assistants stir programmers' curiosity to the full, and restless minds scatter the habit of linear thinking like broken shards. What programmers forget is not just development discipline, but also time.

    The second is that software has its own lifecycle.

    Clearly, by the time programmers start writing code, many things have already happened, and more will continue to happen until the system goes live. These include but are not limited to: gathering requirements, understanding requirements (from specifications to user stories), testing, maintaining infrastructure, and the endless stream of fixes.

    What I mean is, even if AI helps programmers write faster, this stage is only a part of the software lifecycle. Relevant statistics have long shown that programmers spend only 30% of their time writing code, with more time spent trying to understand what they need to implement, as well as designing and learning new skills.

    (Image source: https://github.blog/2023-06-13-survey-reveals-ais-impact-on-the-developer-experience/)

    Human-written code inevitably contains defects—this is a fundamental consensus in software quality. Paradoxically, the more experienced the programmer, the more likely they are to produce obscure issues that may remain undetected for long periods. Production environment problems are particularly nerve-wracking, yet such concerns are difficult to avoid.

    AI-generated code sounds sophisticated—does it deliver flawless results? Unfortunately, the answer may disappoint.

    The large language models behind generative AI use massive internet text corpora as data sources. Although the technology continues to improve, the sheer volume of existing biased data online remains substantial—including vast amounts of flawed code. This means even carefully selected code snippets from AI assistants may contain defects, as problematic code originating from someone halfway across the globe might coincidentally become a developer's chosen solution.

    More critically, generative AI has an amplifying effect. When programmers adopt defective AI-generated code, tools like Copilot record this behavior and subsequently recommend similar flawed code patterns in comparable scenarios. The AI doesn't truly understand the code's quality—it's simply incentivized to continue providing such suggestions. The potential consequences are predictable.

    Programmers must strictly adhere to team development discipline and maintain unified code standards, as this ensures others can understand the code and makes it easier to identify and fix potential issues. However, the varying styles of code provided by code assistants may also introduce more confusion.

    Code defects are only one source of hard-to-repair problems in software, and arguably a minor one. The process of building software is essentially a process of knowledge production and creation. The various roles involved at different stages of the software lifecycle collectively understand and analyze requirements, then translate them into code. Through team and personnel changes they also pass this information along; it takes the shape of requirements and code, but it is fundamentally knowledge.

    However, knowledge tends to decay, and the transfer of knowledge assets inevitably encounters pitfalls. For instance, unreadable code, failure to continuously update documentation, or the replacement of entire teams—these are the root causes of persistent bugs and issues in software. Artificial intelligence has not yet solved these thorny problems in software engineering, at least not in the short term.

    AI code assistants do appear to resemble well-informed programmers. Some are even willing to treat them as partners in pair programming practices. Human resource costs have always been a headache for IT teams—top talent is too expensive, suitable candidates are hard to find, and training proficient programmers from scratch takes too long. With the support of artificial intelligence and code assistants, does this mean teams could be reduced by nearly half?

    AI and code assistants not only fail to provide the aforementioned guarantees of speed and quality but also expect users to be sufficiently experienced programmers to fully leverage their advantages. These experienced programmers must be capable of judging code quality, assessing the impact on existing production code, and possess the patience and skill to meticulously adjust prompts.

    So far I have discussed the many issues to watch for when using code assistants, and the careful thinking they demand. The uncertainty that code assistants introduce creates two kinds of risk: degraded code quality and wasted time. Recognizing and managing both is itself a mark of a sufficiently seasoned programmer.

    Only in this way can the code assistant comfortably play the role of a well-informed novice, while the experienced programmer serves as the gatekeeper—she remains the one responsible for submitting the code. In this sense, what AI truly changes is the programming experience.

    AI and code assistants demonstrate remarkable effectiveness in solving simple, repetitive problems. However, the software development process involves numerous scenarios requiring human expertise to address complex challenges. These include dealing with the growing architectural complexity and scope of software systems, responding to market and business requirements, facilitating cross-role communication and collaboration, as well as handling more contemporary issues concerning code ethics and security.

    Although judging whether programmers are sufficiently professional and skilled isn't as straightforward as counting heads, it is fair to say that introducing AI and code assistants while downsizing the development team yields uncertain results, and currently looks more harmful than beneficial.

    The essence of generative AI is pattern transformation: converting one form of text into another, and advanced code assistants are no exception. If we treat AI code assistants as a panacea for the many challenges of software engineering, we are most likely oversimplifying complex problems.

    What have we been talking about so far?

    We are actually discussing how to measure the effectiveness of investing in AI in software development. Investing in AI is not as simple as purchasing a code assistant license and then sitting back to enjoy cost reduction and efficiency gains. Instead of constantly asking, 'How do we measure the impact of investing in AI and code assistants?', it's better to ask, 'What exactly should we measure?' Starting with the four key metrics defined by DORA is a wise choice: Lead Time for Changes, Deployment Frequency, Mean Time to Recovery (MTTR), and Change Failure Rate.
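
    As a starting point, all four DORA metrics can be computed from data most teams already collect: commit timestamps, deployment records, and incident tickets. A minimal sketch with made-up records (the field layout and numbers are illustrative only, not taken from any real DORA tooling):

        from datetime import datetime
        from statistics import mean

        # Hypothetical records: (commit time, deploy time, deployment failed?)
        deployments = [
            (datetime(2024, 5, 1, 9), datetime(2024, 5, 2, 15), False),
            (datetime(2024, 5, 3, 10), datetime(2024, 5, 3, 18), True),
            (datetime(2024, 5, 6, 11), datetime(2024, 5, 7, 9), False),
        ]
        # Hypothetical incidents: (started, resolved)
        incidents = [(datetime(2024, 5, 3, 19), datetime(2024, 5, 3, 22))]
        window_days = 7

        lead_time_h = mean((deploy - commit).total_seconds() / 3600
                           for commit, deploy, _ in deployments)
        deploys_per_day = len(deployments) / window_days
        change_failure_rate = sum(1 for _, _, failed in deployments if failed) / len(deployments)
        mttr_h = mean((end - start).total_seconds() / 3600 for start, end in incidents)

        print(f"Lead time for changes: {lead_time_h:.1f} h")
        print(f"Deployment frequency:  {deploys_per_day:.2f} per day")
        print(f"Change failure rate:   {change_failure_rate:.0%}")
        print(f"Mean time to recovery: {mttr_h:.1f} h")

    Whether an investment in AI pays off shows up in how these four numbers move over time, not in how quickly a demo autocompletes a function.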

    The following basic measurement principles are provided for reference:
