Debunking 8 Common Myths About AI in Business

Posted by baoshi.rao in AI Insights
    Artificial intelligence has been a hot topic in recent years, with AI business applications drawing significant attention from major enterprises. While we've all heard various 'myths' about AI applications, the validity of these claims may be questionable.

    Misunderstandings exist about any emerging technology, but with AI, they seem particularly pronounced. Perhaps this is because the potential scope of AI's impact has given it a somewhat mythical status.

    "AI is often misunderstood because we're exploring a vast universe – delving into the unknown can be confusing and frightening," says Bill Brock, VP of Engineering at Very.

    This becomes a particular problem for IT leaders trying to determine the actual scale of AI applications in their organizations.

    "While AI in business is becoming increasingly common, there remain significant misunderstandings about its use cases and how it can improve or update legacy systems," Brock notes. "While we might romanticize the idea of robots becoming our colleagues, it's essential to understand how these different technologies can enhance our systems and create more effective environments."

    In reality, "romanticizing technology" is more the stuff of sky-high sales pitches than the bottom-line results strategic CIOs achieve with AI.

    And achieved they have: A new Harvard Business Review Analytic Services report, "The Practical AI Implementation Guide," details how tech executives at companies including Adobe, 7-Eleven, Bayer Crop Science, Caesars Entertainment, Capital One, Discover, Equifax, and Raytheon have documented AI successes.

    Moreover, romanticized realities often spawn various myths and misconceptions that hinder achievable goals. Therefore, we asked Brock and other experts to identify common AI myths in today's businesses to help IT leaders and other professionals separate fact from fiction.

    One persistent myth holds that AI and machine learning are interchangeable terms. They aren't, and understanding the difference is crucial for various reasons – from avoiding snake-oil solutions to building AI initiatives for tangible success. More precisely, machine learning is a specific sub-discipline of AI.

    "In many conversations, I find little distinction made between these terms," says Michael McCourt, Research Scientist at SigOpt. "This can be problematic. If a company's decision-makers think 'building my classification model' equates to 'using our data to solidify our decision-making process,' they may skip important steps in properly interpreting the model's structure and meaning."

    Failing to recognize this myth leads companies to underinvest in AI teams, potentially lacking sufficient personnel to connect these models' development and interpretation with broader business contexts – a setup for AI team failure.

    AI and machine learning aren't the only terms causing confusion. Similar to machine learning, AI and automation often get conflated because there is indeed a relationship between them – an important one.

    "As people become more familiar with AI, they learn that AI is a machine capable of thinking – or at least making informed decisions based on predefined models and algorithms – whereas automation simply completes tasks without human intervention," Brock explains.

    "Automation doesn't necessarily mean AI, but some of AI's most impactful use cases dramatically enhance automation."
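    Brock's distinction can be sketched in a few lines of code. Everything below is invented for illustration (the fixed rule, the "model" weights, and all function names); the point is only that automation executes the same predefined rule every time, while an AI component derives a decision from something learned:

```python
# Pure automation: a fixed, predefined rule with no model involved.
def automated_archive(email):
    return email["age_days"] > 30

# Stand-in for a trained model: a score derived from learned weights.
# (The weights here are made up; a real model would have learned them.)
def learned_spam_score(email, weights={"free": 0.6, "winner": 0.7}):
    return sum(w for word, w in weights.items() if word in email["subject"])

def triage(email):
    if learned_spam_score(email) > 0.5:  # AI-informed decision first...
        return "spam"
    # ...then plain automation handles the rest without human intervention.
    return "archive" if automated_archive(email) else "inbox"

print(triage({"subject": "you are a winner", "age_days": 2}))   # -> spam
print(triage({"subject": "meeting notes", "age_days": 45}))     # -> archive
print(triage({"subject": "meeting notes", "age_days": 3}))      # -> inbox
```

    The third function shows Brock's point in miniature: the AI doesn't replace the automation, it makes the automated flow smarter.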

    An increasingly common (and problematic) misconception is that the only real prerequisite for AI success is vast amounts of data.

    "It's not the quantity of training data that matters, but the quality," emphasizes Rick McFarland, Chief Data Officer at LexisNexis Legal and Professional. "Large amounts of poorly or inconsistently labeled training data don't bring you closer to accurate results. They can actually deceive modelers by creating 'precise' results since variance formulas are inversely proportional to sample size. In short, you get precisely inaccurate results."

    We'll take a moderate stance here and predict that one of the most common lessons from early AI failures will be: We just threw massive data at it and assumed it would work. In early stages, bigger isn't necessarily better.

    "This can't be overstated – quality data is an indispensable part of effective algorithms," says Very's Brock. "People often misunderstand AI's capabilities and how to prepare for success. Poor data yields poor outcomes, no matter what problem you're trying to solve."

    Brock adds that AI and machine learning teams currently focus almost entirely on curating and cleaning data. Even if you're not at that stage yet, always prioritize quality over quantity.

    "Today's best practices focus on creating better training datasets using structured methods and bias testing," McFarland notes. "The result is modelers can actually work with smaller datasets obtained at lower costs."

    This isn't to say "more data" is inherently bad; in fact, it becomes increasingly necessary over time. But time is the key word: You need it to synchronize quantity with quality.
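    McFarland's "precisely inaccurate" warning can be demonstrated with a toy simulation. The rates, flip probability, and sample sizes below are all invented for illustration: a large, inconsistently labeled sample produces a tight estimate of the wrong number, while a far smaller clean sample lands near the truth.

```python
import random

random.seed(0)

TRUE_RATE = 0.30  # actual fraction of positive cases in the population

def biased_labels(n, flip=0.20):
    """A sloppy labeling process that flips 20% of positives to negative."""
    labels = []
    for _ in range(n):
        y = 1 if random.random() < TRUE_RATE else 0
        if y == 1 and random.random() < flip:
            y = 0  # inconsistent labeling
        labels.append(y)
    return labels

big_noisy = biased_labels(100_000)  # lots of poorly labeled data
small_clean = [1 if random.random() < TRUE_RATE else 0 for _ in range(1_000)]

# Variance shrinks with sample size, so the big sample looks "precise"...
est_noisy = sum(big_noisy) / len(big_noisy)    # ~0.24: precise but wrong
est_clean = sum(small_clean) / len(small_clean)  # near the true 0.30

print(f"large noisy sample estimate: {est_noisy:.3f}")
print(f"small clean sample estimate: {est_clean:.3f}")
```

    The large sample concentrates tightly around 0.24 rather than the true 0.30 – the bias survives no matter how much data is added, which is exactly the deception McFarland describes.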

    Generally, no one expects immediate ROI from AI initiatives, but sometimes this technology gets portrayed as: Just turn it on and watch the magic happen.

    "AI and ML engines require training and need substantial data to learn. Some data can be seeded," explains Javed Sikander, CTO of NetEnrich. "However, most data comes from the deployment domains where AI/ML systems focus their learning. Therefore, expecting AI/ML systems to deliver recommendations and insights from day one is unrealistic. Processes need to be established, and resources must be allocated across various environments to enable progressive learning. Only then does the magic occur."

    Diego Oppenheimer, CEO of Algorithmia, observes that organizations often approach AI and ML as if it were just like any other software development.

    "The myth that AI/ML development is just software development persists," Oppenheimer states. "In reality, most ML projects fail largely because ML workloads behave very differently from traditional software, requiring distinct tools, infrastructure, and processes for large-scale deployment and management."

    Oppenheimer highlights these key challenges:

    1. Heterogeneity
    There's an enormous, ever-growing menu of languages and frameworks to navigate. "Data science is about choices, and it will get bigger before it gets smaller," Oppenheimer remarks.

    2. Composability
    AI and ML often involve synchronized pipelines of multiple components, each potentially built by different teams using different languages.

    Oppenheimer illustrates with a system requiring one model to select target images, another to extract text from those images, a third for sentiment analysis on that text, and a fourth to recommend actions based on that sentiment.

    While traditional app development might move toward this via microservices, Oppenheimer notes it remains relatively monolithic compared to AI/ML needs, requiring team adjustments.
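    As a sketch, the four-stage pipeline Oppenheimer describes might be wired together like this. Every function below is an illustrative stub standing in for a model that a different team might own, possibly in a different language; the point is the synchronized hand-off between stages:

```python
# Illustrative stubs for a four-model pipeline; none of these are real models.

def select_images(images):       # stage 1: pick the target images
    return [img for img in images if img["relevant"]]

def extract_text(image):         # stage 2: OCR stand-in
    return image["caption"]

def sentiment(text):             # stage 3: toy sentiment score
    return 1 if "great" in text else -1

def recommend_action(score):     # stage 4: act on the sentiment
    return "promote" if score > 0 else "review"

def pipeline(images):
    """Each stage's output feeds the next; a failure anywhere breaks all."""
    return [
        recommend_action(sentiment(extract_text(img)))
        for img in select_images(images)
    ]

sample = [
    {"relevant": True, "caption": "great product shot"},
    {"relevant": False, "caption": "blurry"},
    {"relevant": True, "caption": "broken packaging"},
]
print(pipeline(sample))  # -> ['promote', 'review']
```

    Even in this toy form, the coordination problem is visible: changing the output format of any one stage forces changes downstream, which is why Oppenheimer argues the microservices mindset only gets teams partway there.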

    3. Development Process
    "In traditional software development, the output is code executed in controlled environments," Oppenheimer explains. "In machine learning, the output is an evolving ecosystem – inferences made through code interacting with live data. This demands a very different, more iterative cycle."

    4. Hardware/Infrastructure
    "[It's] still evolving: CPUs, TPUs, GPUs, edge computing, and countless new options – each with different strengths and challenges."

    5. Performance Metrics
    "ML-based performance metrics are multidimensional and highly context-sensitive," Oppenheimer points out, meaning no standard metric set applies to everyone or even most use cases.

    "A retail fraud-detection model may be allowed to err on the side of false positives as long as it returns results quickly enough not to disrupt the checkout process; 75% accuracy might be good enough," he says. "Fraud-detection models used by forensic accountants may instead trade that speed for higher accuracy."
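    The trade-off can be made concrete with simple confusion-matrix arithmetic. The counts below are invented for illustration: the "retail" model tolerates false positives to stay fast, while the "forensic" model accepts missing a few more cases in exchange for far fewer false alarms.

```python
# Standard confusion-matrix metrics; tp/fp/fn/tn counts are made up.
def metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "precision": tp / (tp + fp),  # how many flagged cases were real
        "recall": tp / (tp + fn),     # how many real cases were caught
    }

# Fast retail model: many false positives, but checkout stays quick.
retail = metrics(tp=60, fp=20, fn=5, tn=15)
# Forensic model: slower in practice, but far fewer false alarms.
forensic = metrics(tp=55, fp=2, fn=10, tn=33)

print(retail)    # accuracy 0.75, precision 0.75, recall ~0.92
print(forensic)  # accuracy 0.88, precision ~0.96, recall ~0.85
```

    Neither model is "better" in the abstract – which is precisely why no single standard metric set applies across use cases.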

    Sometimes, we make daunting things seem more manageable by comparing them to the familiar – as if to say, 'We've been here before; we've handled this.'

    In this context, it might lead IT teams to view AI as just another technology adoption cycle. But that's not the case, says Guy Ernest, VP of Data and AI at AllCloud.

    "Most technologies are fragile," Ernest said. "The more you use them, the more complex they become, and the easier they are to break. AI has the potential to be more like the human brain or body: the more you use it, the stronger and smarter it becomes."

    No, AI isn't the solution to every business problem—at least not yet, notes McCourt of SigOpt. But he adds that companies that view AI as merely a tech industry trend are at risk.

    "The worst-case scenario is that a company may choose to opt out of the AI revolution, and if current trends continue, it could leave the company following the crowd rather than leading it," McCourt said.

    "This myth began and continues to permeate the business world because the early developers and adopters of AI were the most tech-savvy and advanced companies. But new literature and tools emerge daily, expanding the foundation for companies to start making AI-driven decisions."

    AI's mythical status partly stems from seeing it surpass human intelligence in certain areas. But that's when the narrative of 'robot overlords' starts to peak.

    "Machines can only be as smart as the data they can access and the actions they're programmed to take," said Sikander. "AI and machine learning can help us identify patterns in vast data and automate actions with minimal human intervention. But the algorithms and models built to compute these decisions and actions must be provided by humans."

    There's a related misconception that AI learns 'just like humans.' That's not the case today, says McFarland, Chief Data Officer at LexisNexis Legal & Professional.

    "Humans have inherent advantages in learning or problem-solving—such as boredom," McFarland said. "AI models never get bored or see the folly in their ways. They seek the best answer from nearly infinite possibilities, even chasing it deep into a well-known rabbit hole—possibly never emerging. In contrast, humans grow weary of pursuing endless possibilities, stop, reconsider the situation, and pursue a different path without being told."
