Posts


  • EXCLUSIVE INTERVIEW: Sowmay Jain on Bhindi, Agentic AI, AI Fatigue & the Future of Work
    baoshi.rao
    AI Latest News

  • Electronic Arts Nears $50B Acquisition Deal
    baoshi.rao

    Published Time: 2025-09-26T19:25:52+00:00

    The video game company Electronic Arts is nearing a $50 billion sale to a group of investors including Silver Lake and Saudi Arabia’s Public Investment Fund, according to a report from The Wall Street Journal.

    EA is best known for its annual sports titles in franchises like Madden NFL, FIFA, and NBA Live, as well as video game series like The Sims, Battlefield, Need for Speed, and Star Wars.

    According to the Journal’s report, the deal could be the largest leveraged buyout in history, meaning an acquisition funded mostly with debt.

    After news broke that EA could be going private, the company’s stock jumped 15% on Friday afternoon.

    AI Latest News

  • Renowned Roboticist Warns Humanoid Robot Bubble Will Burst
    baoshi.rao

    Published Time: 2025-09-27T06:45:54+00:00

    Renowned roboticist Rodney Brooks has a wake-up call for investors funneling billions into humanoid robot startups: you’re wasting your money.

    Brooks, who co-founded iRobot and spent decades at MIT, is particularly skeptical of companies like Tesla and the high-profile AI robotics company Figure trying to teach robots dexterity by showing them videos of humans doing tasks. In a new essay, he calls this approach 'pure fantasy thinking.'

    The problem? Human hands are incredibly sophisticated, packed with about 17,000 specialized touch receptors that no robot comes close to matching. While machine learning transformed speech recognition and image processing, those breakthroughs built on decades of existing technology for capturing the right data. 'We don’t have such a tradition for touch data,' Brooks points out.

    Then there’s safety. Full-sized walking humanoid robots pump massive amounts of energy into staying upright. When they fall, they’re dangerous. Physics means a robot twice the size of today’s models would pack eight times the harmful energy.
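
    As a rough back-of-envelope illustration (the mass and fall height below are assumed, not from Brooks’s essay): for a geometrically similar robot, mass grows with the cube of linear size, and the energy released in a fall grows roughly in proportion to that mass.

        # Back-of-envelope sketch with assumed numbers: mass scales with the cube
        # of linear size, and fall energy scales roughly with mass.
        G = 9.81  # m/s^2

        def fall_energy_joules(mass_kg: float, drop_height_m: float) -> float:
            # Gravitational potential energy released when the torso drops this far.
            return mass_kg * G * drop_height_m

        base_mass_kg, base_drop_m = 60.0, 1.0   # assumed present-day humanoid
        scale = 2.0                             # "twice the size"

        e_today = fall_energy_joules(base_mass_kg, base_drop_m)
        e_scaled = fall_energy_joules(base_mass_kg * scale**3, base_drop_m)
        print(f"{e_scaled / e_today:.0f}x the energy")   # 8x at the same drop height
        # If the fall height also doubles with the robot, the ratio rises to 16x.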

    Brooks predicts that in 15 years, successful 'humanoid' robots will have wheels, multiple arms, and specialized sensors, abandoning the human form altogether. Meanwhile, he’s thoroughly convinced that today’s billions are funding expensive training experiments that will never scale to mass production.

    It’s far from the first time Brooks has poured cold water on expectations set by brash entrepreneurs and eager investors. Last year, he talked at length with TechCrunch about why the promise of generative AI exceeds its abilities and can even create more work in some cases.

    For example, the AI research nonprofit METR said this summer it had recruited 16 highly rated developers from large open-source repositories to measure the impact of AI tools on real-world software development. It assigned them nearly 250 real issues to address, both with the tools and without them, and recorded their screens. When the developers used the AI tools, they took 19% longer to complete their tasks. Just as interestingly, they perceived that the AI had sped them up by 20%.

    Brooks has also long argued that AI is not the existential threat that many, including Elon Musk, have posited that it is. TechCrunch talked with Brooks about this back in 2017 at MIT, when the landscape looked very different but not entirely dissimilar to today’s playing field.

    At the time, Brooks said he was just starting to see more companies specializing in making data sets for machine learning — a trend that has only continued. Relatedly, he argued that it wasn’t necessarily a foregone conclusion that Big Tech companies would win in robotics, despite what long seemed an insurmountable lead in the amount of data they control. Yet today’s leading robotics companies have not escaped those companies’ gravitational pull.

    Apptronik, a humanoid robot maker that has raised nearly $450 million from investors, counts Google among its backers and partnered with Google’s DeepMind robotics team late last year to 'bring together best-in-class artificial intelligence with cutting-edge hardware and embodied intelligence.'

    Figure, similarly, is backed in part by Microsoft and OpenAI Startup Fund and partnered with OpenAI in February 2024 to combine OpenAI’s research with its own 'deep understanding of robotics hardware and software.' The two split up almost exactly a year later, this past March, with FigureAI saying it had enjoyed a 'major breakthrough' in its own in-house, end-to-end AI for robotics.

    Earlier this month, Figure announced it had received over $1 billion in committed capital in its latest funding round and said the deal valued the company at an astonishing $39 billion.

    AI Latest News

  • Wiz CTO Ami Luttwak on AI's Impact on Cyberattacks
    baoshi.rao

    Published Time: 2025-09-28T14:00:00+00:00

    "One of the key things to understand about cybersecurity is that it’s a mind game," Ami Luttwak, chief technologist at cybersecurity firm Wiz, told TechCrunch on a recent episode of Equity. "If there’s a new technology wave coming, there are new opportunities for [attackers] to start using it."

    As enterprises rush to embed AI into their workflows — whether through vibe coding, AI agent integration, or new tooling — the attack surface is expanding. AI helps developers ship code faster, but that speed often comes with shortcuts and mistakes, creating new openings for attackers.

    Wiz, which was acquired by Google earlier this year for $32 billion, recently ran tests, says Luttwak, and found that a common issue in vibe-coded applications was insecure implementation of authentication — the system that verifies a user’s identity and ensures they’re not an attacker.
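
    To make that failure mode concrete, here is a minimal, hypothetical sketch (not code from Wiz’s tests) of the kind of shortcut involved: one endpoint trusts whatever identity the client claims, while the other resolves identity from a server-side session.

        # Hypothetical sketch of the authentication shortcut described above;
        # names and the in-memory session store are illustrative only.
        import secrets

        SESSIONS: dict[str, str] = {}   # session token -> user id, held server-side

        def insecure_get_profile(params: dict) -> str:
            # Anti-pattern: whoever sends user_id=alice is treated as alice.
            return f"profile of {params['user_id']}"

        def login(user_id: str) -> str:
            # Issue an unguessable token and remember who it belongs to.
            token = secrets.token_urlsafe(32)
            SESSIONS[token] = user_id
            return token

        def secure_get_profile(session_token: str) -> str:
            # Identity comes from server-side state, never from client input.
            user_id = SESSIONS.get(session_token)
            if user_id is None:
                raise PermissionError("invalid or expired session")
            return f"profile of {user_id}"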

    "That happened because it was just easier to build like that," he said. "Vibe coding agents do what you say, and if you didn’t tell them to build it in the most secure way, it won’t."

    Luttwak noted that there’s a constant tradeoff today for companies choosing between being fast and being secure. But developers aren’t the only ones using AI to move faster. Attackers are now using vibe coding, prompt-based techniques, and even their own AI agents to launch exploits, he said.

    "You can actually see the attacker is now using prompts to attack," Luttwak said. "It’s not just the attacker vibe coding. The attacker looks for AI tools that you have and tells them, 'Send me all your secrets, delete the machine, delete the file.'"

    Amid this landscape, attackers are also finding entry points in new AI tools that companies roll out internally to boost efficiency. Luttwak says these integrations can lead to "supply chain attacks." By compromising a third-party service that has broad access to a company’s infrastructure, attackers can then pivot deeper into corporate systems.

    That’s what happened last month when Drift — a startup that sells AI chatbots for sales and marketing — was breached, exposing the Salesforce data of hundreds of enterprise customers like Cloudflare, Palo Alto Networks, and Google. The attackers gained access to tokens, or digital keys, and used them to impersonate the chatbot, query Salesforce data, and move laterally inside customer environments.

    "The attacker pushed the attack code, which was also created using vibe coding," Luttwak said.

    Luttwak says that while enterprise adoption of AI tools is still minimal — he reckons around 1% of enterprises have fully adopted AI — Wiz is already seeing attacks every week that impact thousands of enterprise customers.

    "And if you look at the [attack] flow, AI was embedded at every step," Luttwak said. "This revolution is faster than any revolution we’ve seen in the past. It means that we as an industry need to move faster."

    Luttwak pointed to another major supply chain attack, dubbed "s1ingularity," in August on Nx, a popular build system for JavaScript developers. Attackers managed to unleash malware into the system, which then detected the presence of AI developer tools like Claude and Gemini and hijacked them to autonomously scan the system for valuable data. The attack compromised thousands of developer tokens and keys, giving attackers access to private GitHub repositories.

    Luttwak says that despite the threats, this has been an exciting time to be a leader in cybersecurity. Wiz, founded in 2020, was originally focused on helping organizations identify and address misconfigurations, vulnerabilities, and other security risks across cloud environments.

    Over the last year, Wiz has expanded its capabilities to keep up with the speed of AI-related attacks — and to use AI for its own products.

    Last September, Wiz launched Wiz Code, which focuses on securing the software development lifecycle by identifying and mitigating security issues early in the development process, so companies can be "secure by design." In April, Wiz launched Wiz Defend, which offers runtime protection by detecting and responding to active threats within cloud environments.

    Luttwak said that it’s vital for Wiz to fully understand its customers’ applications if the startup is going to help with what he calls "horizontal security."

    "We need to understand why you’re building it … so I can build the security tool that no one has ever had before, the security tool that understands you," he said.

    ‘From day one, you need to have a CISO’

    The democratization of AI tools has resulted in a flood of new startups promising to solve enterprise pain points. But Luttwak says enterprises shouldn’t just send all of their company, employee, and customer data to "every small SaaS company that has five employees just because they say, 'Give me all your data, and I will give you amazing AI insights.'"

    Of course, those startups need that data if their offering is going to have any value. Luttwak says that means it’s incumbent upon them to make sure they’re operating like a secure organization from the start.

    "From day one, you need to think about security and compliance," he said. "From day one, you need to have a CISO (chief information security officer). Even if you have five people."

    Before writing a single line of code, startups should think like a highly secure organization, he said. They need to consider enterprise security features, audit logs, authentication, access to production, development practices, security ownership, and single sign-on. Planning this way from the start means you won’t have to overhaul processes later and incur what Luttwak calls "security debt." And if you aim to sell to enterprises, you’ll already be prepared to protect their data.

    "We were SOC2 compliant [a compliance framework] before we had code," he said. "And I can tell you a secret. Getting SOC2 compliance for five employees is much easier than for 500 employees."

    The next most important step for startups is to think about architecture, he said.

    "If you’re an AI startup that wants to focus on enterprise from day one, you have to think about an architecture that allows the data of the customer to stay … in the customer environment."

    For cybersecurity startups looking to step into the field in the age of AI, Luttwak says now’s the time. Everything from phishing protection and email security to malware and endpoint protection is fertile ground for innovation, both for attackers and defenders. The same is true for startups that could help with workflow and automation tools to do "vibe security," since many security teams still don’t know how to use AI to defend against AI.

    "The game is open," Luttwak said. "If every area of security now has new attacks, then it means we have to rethink every part of security."

    AI Latest News

  • The Billion-Dollar Infrastructure Deals Powering the AI Boom
    baoshi.rao

    Published Time: 2025-09-28T18:07:00+00:00

    It takes a lot of computing power to run an AI product — and as the tech industry races to tap the power of AI models, there’s a parallel race underway to build the infrastructure that will power them. On a recent earnings call, Nvidia CEO Jensen Huang estimated that between $3 trillion and $4 trillion will be spent on AI infrastructure by the end of the decade — with much of that money coming from AI companies. Along the way, they’re placing immense strain on power grids and pushing the industry’s building capacity to its limit.

    Below, we’ve laid out everything we know about the biggest AI infrastructure projects, including major spending from Meta, Oracle, Microsoft, Google, and OpenAI. We’ll keep it updated as the boom continues and the numbers climb even higher.

    Microsoft’s $1 billion investment in OpenAI

    This is arguably the deal that kicked off the whole contemporary AI boom: In 2019, Microsoft made a $1 billion investment in a buzzy non-profit called OpenAI, known mostly for its association with Elon Musk. Crucially, the deal made Microsoft the exclusive cloud provider for OpenAI — and as the demands of model training became more intense, more of Microsoft’s investment started to come in the form of Azure cloud credit rather than cash.

    It was a great deal for both sides: Microsoft was able to claim more Azure sales, and OpenAI got more money for its biggest single expense. In the years that followed, Microsoft would build its investment up to nearly $14 billion — a move that is set to pay off enormously when OpenAI converts into a for-profit company.

    The partnership between the two companies has loosened more recently. In January, OpenAI announced it would no longer be using Microsoft’s cloud exclusively, instead giving the company a right of first refusal on future infrastructure demands while pursuing other providers if Azure couldn’t meet its needs. Microsoft, for its part, has since begun exploring other foundation models to power its AI products, establishing even more independence from the AI giant.

    OpenAI’s arrangement with Microsoft was so successful that it has become common practice for AI companies to sign on with a particular cloud provider. Anthropic has received $8 billion in investment from Amazon, while making kernel-level modifications to the company’s hardware to make it better suited for AI training. Google Cloud has also signed on smaller AI companies like Lovable and Windsurf as "primary computing partners," although those deals did not involve any investment. And even OpenAI has gone back to the well, receiving a $100 billion investment from Nvidia in September, giving it the capacity to buy even more of the company’s GPUs.

    The rise of Oracle

    On June 30, 2025, Oracle revealed in an SEC filing that it had signed a $30 billion cloud services deal with an unnamed partner; this is more than the company’s cloud revenues for all of the previous fiscal year. OpenAI was eventually revealed as the partner, securing Oracle a spot alongside Google as one of OpenAI’s string of post-Microsoft hosting partners. Unsurprisingly, the company’s stock went shooting up.

    A few months later, it happened again. On September 10, Oracle revealed a five-year, $300 billion deal for compute power, set to begin in 2027. Oracle’s stock climbed even higher, briefly making founder Larry Ellison the richest man in the world. The sheer scale of the deal is stunning: OpenAI does not have $300 billion to spend, so the figure presumes immense growth for both companies, and more than a little faith.

    But before a single dollar is spent, the deal has already cemented Oracle as one of the leading AI infrastructure providers — and a financial force to be reckoned with.

    Building tomorrow’s hyperscale data centers

    For companies like Meta that already have significant legacy infrastructure, the story is more complicated — although equally expensive. Mark Zuckerberg has said that Meta plans to spend $600 billion on U.S. infrastructure through the end of 2028.

    In just the first half of 2025, the company spent $30 billion more than it did a year earlier, driven largely by its growing AI ambitions. Some of that spending goes toward big-ticket cloud contracts, like a recent $10 billion deal with Google Cloud, but even more resources are being poured into two massive new data centers.

    A new 2,250-acre site in Louisiana, dubbed Hyperion, will cost an estimated $10 billion to build out and provide an estimated 5 gigawatts of compute power. Notably, the site includes an arrangement with a local nuclear power plant to handle the increased energy load. A smaller site in Ohio, called Prometheus, is expected to come online in 2026, powered by natural gas.

    That kind of buildout comes with real environmental costs. Elon Musk’s xAI built its own hybrid data center and power-generation plant in South Memphis, Tennessee. The plant has quickly become one of the county’s largest emitters of smog-producing chemicals, thanks to a string of natural gas turbines that experts say violate the Clean Air Act.

    The Stargate moonshot

    Just two days after his second inauguration, President Trump announced a joint venture between SoftBank, OpenAI, and Oracle, meant to spend $500 billion building AI infrastructure in the United States. Named "Stargate" after the 1994 film, the project arrived with incredible amounts of hype, with Trump calling it "the largest AI infrastructure project in history." Sam Altman seemed to agree, saying, "I think this will be the most important project of this era."

    In broad strokes, the plan was for SoftBank to provide the funding, with Oracle handling the buildout with input from OpenAI. Overseeing it all was Trump, who promised to clear away any regulatory hurdles that might slow down the build. But there were doubts from the beginning, including from Elon Musk, Altman’s business rival, who claimed the project did not have the available funds.

    As the hype has died down, the project has lost some momentum. In August, Bloomberg reported that the partners were failing to reach consensus. Nonetheless, the project has moved forward with the construction of eight data centers in Abilene, Texas, with construction on the final building set to be finished by the end of 2026.

    This article was first published on September 22.

    AI Latest News

  • AI Services Transformation May Be Harder Than VCs Anticipate
    baoshi.rao

    Published Time: 2025-09-29T01:20:12+00:00

    Venture capitalists have convinced themselves they’ve found the next big investing edge: using AI to wring software-like margins out of traditionally labor-intensive services businesses. The strategy involves acquiring mature professional services firms, implementing AI to automate tasks, then using the improved cash flow to roll up more companies.

    Leading the charge is General Catalyst (GC), which has dedicated $1.5 billion of its latest fundraise to what it calls a 'creation' strategy that’s focused on incubating AI-native software companies in specific verticals, then using those companies as acquisition vehicles to buy established firms — and their customers — in the same sectors. GC has placed bets across seven industries, from legal services to IT management, with plans to expand to up to 20 sectors altogether.

    'Services globally is a $16 trillion revenue a year globally,' said Marc Bhargava, who leads GC’s related efforts, in a recent interview with TechCrunch. 'In comparison, software is only $1 trillion globally,' he noted, adding that the allure of software investing has always been its higher margins. 'As you get software to scale, there’s very little marginal cost and there’s a great deal of marginal revenue.'

    If you can automate services businesses, too, he said – handling 30% to 50% of those companies’ work with AI, and even automating up to 70% of core tasks in the case of call centers – the math begins to look irresistible.

    The game plan seems to be working. Take Titan MSP, one of General Catalyst’s portfolio companies. The investment firm provided $74 million over two tranches to help the company develop AI tools for managed service providers, then it acquired RFA, a well-known IT services firm. Through pilot programs, says Bhargava, Titan demonstrated it could automate 38% of typical MSP tasks. The company now plans to use its improved margins to acquire additional MSPs in a classic roll-up strategy.

    Similarly, the firm incubated Eudia, which focuses on in-house legal departments rather than law firms. Eudia has signed up Fortune 100 clients including Chevron, Southwest Airlines, and Stripe, offering fixed-fee legal services powered by AI rather than traditional hourly billing. The company recently acquired Johnson Hanna, an alternative legal service provider, to expand its reach.

    General Catalyst looks to double – at least – the EBITDA margin of those companies that it’s acquiring, Bhargava explained.

    The powerhouse firm isn’t alone in its thinking. The venture firm Mayfield has carved out $100 million specifically for 'AI teammates' investments, including Gruve, an IT consulting startup that acquired a $5 million security consulting company, then grew it to $15 million in revenue within six months while achieving an 80% gross margin, according to its founders.

    'If 80% of the work will be done by AI, it can have an 80% to 90% gross margin,' Navin Chaddha, Mayfield’s managing director, told TechCrunch this summer. 'You could have blended margins of 60% to 70% and produce 20% to 30% net income.'
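
    The blended-margin arithmetic is easy to reproduce, though the result leans heavily on the inputs. In the sketch below, only the 80% to 90% AI gross margin comes from the quote above; the human-delivered margin and the share of work handled by AI are illustrative assumptions.

        # Blended-margin arithmetic; only the 80-90% AI gross margin comes from the
        # quote, the other inputs are assumptions chosen for illustration.
        def blended_gross_margin(ai_share: float, ai_margin: float, human_margin: float) -> float:
            return ai_share * ai_margin + (1 - ai_share) * human_margin

        for ai_share in (0.6, 0.7, 0.8):   # share of delivered work handled by AI
            gm = blended_gross_margin(ai_share, ai_margin=0.85, human_margin=0.30)
            print(f"AI share {ai_share:.0%} -> blended gross margin {gm:.0%}")
        # Roughly 63-74% blended gross margin under these assumptions; after operating
        # costs, the 20-30% net income Chaddha describes is plausible, but the result
        # swings quickly if AI ends up handling less of the work than hoped.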

    Solo investor Elad Gil has been pursuing a similar strategy for three years, backing companies that acquire mature businesses and transform them with AI. 'If you own the asset, you can [transform it] much more rapidly than if you’re just selling software as a vendor,' Gil said in an interview with TechCrunch this spring.

    But early warning signs suggest this whole services-industry metamorphosis may be more complicated than VCs anticipate. A recent study by researchers at Stanford Social Media Lab and BetterUp Labs that surveyed 1,150 full-time employees across industries found that 40% of those employees are having to shoulder more work because of what the researchers call 'workslop' – AI-generated work that appears polished but lacks substance, creating more work (and headaches) for colleagues.

    The trend is taking a toll on organizations. Employees involved in the survey say they spend an average of nearly two hours dealing with each instance of workslop: first deciphering it, then deciding whether to send it back, and oftentimes simply fixing it themselves.

    Based on those participants’ estimates of time spent, along with their self-reported salaries, the authors of the survey estimate that workslop carries an invisible tax of $186 per month per person. 'For an organization of 10,000 workers, given the estimated prevalence of workslop ... this yields over $9 million per year in lost productivity,' they write in a new Harvard Business Review article.
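
    That estimate can be sanity-checked with the figures reported in this article; the exact published number depends on the precise prevalence the authors use.

        # Back-of-envelope check using the figures quoted in this article.
        cost_per_affected_person_per_month = 186   # USD, from self-reported time and salaries
        prevalence = 0.40                          # ~40% of employees shouldering workslop
        headcount = 10_000

        annual_cost = cost_per_affected_person_per_month * 12 * headcount * prevalence
        print(f"~${annual_cost / 1e6:.1f}M per year")   # ~$8.9M, in the ballpark of "over $9 million"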

    Bhargava disputed the notion that AI is overhyped, arguing instead that all these implementation failures actually validate General Catalyst’s approach. 'I think it kind of shows the opportunity, which is, it’s not easy to apply AI technology to these businesses,' he said. 'If all the Fortune 100 and all these folks could just bring in a consulting firm, slap on some AI, get a contract with OpenAI, and transform their business, then obviously our thesis [would be] a little bit less robust. But the reality is, it’s really hard to transform a company with AI.'

    He pointed to the technical sophistication required in AI as the most critical missing puzzle piece. 'There’s a lot of different technology. It’s good at different things,' he said. 'You really need these applied AI engineers from places like Rippling and Ramp and Figma and Scale, who have worked with the different models, understand their nuances, understand which ones are good for what, understand how to wrap it in software.' That complexity is exactly why General Catalyst’s strategy of pairing AI specialists with industry experts to build companies from the ground up makes sense, he argued.

    Still, there’s no denying that workslop threatens to undermine — to some extent — the strategy’s core economics. Even if a holding company is created as a starting point, if the acquired companies reduce staff as the AI-efficiency thesis suggests they should, they’ll have fewer people available to catch and correct AI-generated errors. And if the companies maintain current staffing levels to handle the additional work created by problematic AI output, the huge margin gains that VCs are counting on might never be realized.

    Arguably, these scenarios should slow the scaling plans that are central to the VCs’ roll-up strategies, since they potentially undermine the numbers that make these deals attractive. But let’s face it: it will take more than a study or two to slow down most Silicon Valley investors.

    In fact, because they typically acquire businesses with existing cash flow, General Catalyst says its 'creation strategy' companies are already profitable — a marked departure from the traditional VC playbook of backing high-growth, cash-burning startups. It’s also likely a welcome change for the limited partners behind venture firms, who have bankrolled years of losses at companies that never reached profitability.

    'As long as AI technology continues to improve, and we see this massive investment and improvement in the models,' Bhargava said, 'I think there’ll just be more and more industries for us to help incubate companies.'

    AI Latest News

  • Governing the Age of Agentic AI: Balancing Autonomy and Accountability
    baoshi.rao

    Published Time: 2025-09-24T08:19:42+00:00

    Author: Rodrigo Coutinho, Co-Founder and AI Product Manager at OutSystems

    AI has moved beyond pilot projects and future promises. Today, it’s embedded in industries, with more than three-quarters of organisations (78%) now using AI in at least one business function. The next leap, however, is agentic AI: systems that don’t just provide insights or automate narrow tasks but operate as autonomous agents, capable of adapting to changing inputs, connecting with other systems, and influencing business-critical decisions. Although these agents will deliver greater value, agentic AI also poses challenges.

    Imagine agents that proactively resolve customer issues in real time or adapt applications dynamically to meet shifting business priorities. That greater autonomy inevitably brings new risks. Without the right safeguards, AI agents may drift from their intended purpose or make choices that clash with business rules, regulations, or ethical standards. Navigating this new era requires stronger oversight, where human judgement, governance frameworks, and transparency are built in from the start.

    The potential of agentic AI is vast, but so are the obligations that come with deployment. Low-code platforms offer one path forward, serving as a control layer between autonomous agents and enterprise systems. By embedding governance and compliance into development, they give organisations confidence that AI-driven processes will advance strategic goals without adding unnecessary risk.

    Designing safeguards instead of code for agentic AI

    Agentic AI marks a step change in how people interact with software, a fundamental shift in the relationship between the two. Traditionally, developers have focused on building applications with clear requirements and predictable outputs. Now, instead of fragmented applications, teams will orchestrate entire ecosystems of agents that interact with people, systems and data.

    As these systems mature, developers shift from writing code line by line to defining the safeguards that steer them. Because these agents adapt and may respond differently to the same input, transparency and accountability must be built in from the start. By embedding oversight and compliance into design, developers ensure AI-driven decisions stay reliable, explainable and aligned with business goals. The change demands that developers and IT leaders embrace a broader supervisor role, guiding both technological and organisational change over time.

    Why transparency and control matter in agentic AI

    Greater autonomy exposes organisations to additional vulnerabilities. According to a recent OutSystems study, 64% of technology leaders cite governance, trust and safety as top concerns when deploying AI agents at scale. Without strong safeguards, these risks extend beyond compliance gaps to include security breaches and reputational damage. Opacity in agentic systems makes it difficult for leaders to understand or validate decisions, eroding confidence both internally and with customers.

    Left unchecked, autonomous agents can blur accountability, widen the attack surface and create inconsistency at scale. Without visibility into why an AI system acts, organisations risk losing accountability in critical workflows. At the same time, agents that interact with sensitive data and systems expand the attack surface for cyber threats, while unmonitored "agent sprawl" can create redundancy, fragmentation and inconsistent decisions. Together, these challenges underscore the need for strong governance frameworks that maintain trust and control as autonomy scales.

    Scaling AI safely with low-code foundations

    Crucially, adopting agentic AI need not involve rebuilding governance from the ground up. Organisations have multiple approaches available to them, including low-code platforms, which offer a reliable, scalable framework where security, compliance and governance are already part of the development fabric.

    Across enterprises, IT teams are being asked to embed agents into operations without disrupting what already works. With the right frameworks, IT teams can deploy AI agents directly into enterprise-wide operations without disrupting current workflows or re-architecting core systems. Organisations have full control over how AI agents operate at every step, ultimately building trust to scale confidently in the enterprise.

    Low-code places governance, security and scalability at the heart of AI adoption. By unifying app and agent development in a single environment, it becomes easier to embed compliance and oversight from the start. The ability to integrate seamlessly with enterprise systems, combined with built-in DevSecOps practices, ensures that vulnerabilities are addressed before deployment. And with out-of-the-box infrastructure, organisations can scale confidently without having to reinvent foundational elements of governance or security.

    The approach lets organisations pilot and scale agentic AI while keeping compliance and security intact. Low-code makes it easier to deliver with speed and security, giving developers and IT leaders confidence to progress.

    Smarter oversight for smarter systems

    Ultimately, low-code provides a dependable route to scaling autonomous AI while preserving trust. By unifying app and agent development in one environment, low-code embeds compliance and oversight from the start. Seamless integration with systems and built-in DevSecOps practices help address vulnerabilities before deployment, while ready-made infrastructure enables scale without reinventing governance from scratch. For developers and IT leaders, this shift means moving beyond writing code to guiding the rules and safeguards that shape autonomous systems. In a fast-changing landscape, low-code provides the flexibility and resilience needed to experiment confidently, embrace innovation early, and maintain trust as AI grows more autonomous.

    See also: Agentic AI: Promise, scepticism, and its meaning for Southeast Asia

    AI Latest News

  • OpenAI and Nvidia Plan $100B Chip Deal to Shape AI Future
    baoshi.rao

    Published Time: 2025-09-24T10:00:00+00:00

    OpenAI and Nvidia have signed a letter of intent for a $100B partnership that could reshape how AI systems are trained and deployed. The plan calls for at least 10 gigawatts of Nvidia hardware to support OpenAI’s next-generation AI infrastructure, which will train and run future models aimed at superintelligence.

    To support the rollout, Nvidia intends to invest up to $100 billion in OpenAI as the systems are deployed. The first phase is scheduled to go live in the second half of 2026, powered by Nvidia’s upcoming Vera Rubin platform.

    A deal with wide implications

    The agreement shows how closely tied the largest AI players are becoming. Nvidia, the main supplier of AI chips, would gain a financial stake in OpenAI, one of its biggest customers. For OpenAI, the deal brings both funding and guaranteed access to Nvidia’s sought-after processors.

    The move could unsettle rivals. Some may see it as reinforcing Nvidia’s dominance in chips and OpenAI’s lead in AI software, raising questions about fair competition.

    A person familiar with the matter said the partnership involves two linked steps: Nvidia will buy non-voting shares in OpenAI, and OpenAI will then use that money to purchase Nvidia chips.

    OpenAI on why compute drives AI growth

    “Everything starts with compute,” OpenAI CEO Sam Altman said in a statement. “Compute infrastructure will be the basis for the economy of the future, and we will utilise what we’re building with Nvidia to both create new AI breakthroughs and empower people and businesses with them at scale.”

    The companies said details of the partnership will be settled in the coming weeks. They also noted that 10 gigawatts of chips would consume as much power as more than 8 million US households.
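
    The household comparison holds up to simple arithmetic if you assume (the companies don’t specify this) that an average US household draws a bit over 1 kW of continuous power.

        # Sanity check of the 10 GW comparison; the household draw is an assumption.
        deployment_watts = 10e9     # 10 gigawatts of chips and supporting systems
        household_watts = 1.2e3     # ~1.2 kW average continuous draw per US household (assumed)

        print(f"{deployment_watts / household_watts / 1e6:.1f} million households")   # ~8.3 million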

    Nvidia’s stock climbed as much as 4.4% to a record high after the news. Oracle, which is working with OpenAI, SoftBank, and Microsoft on a $500 billion global AI data centre project called Stargate, rose about 6%.

    How the deal is structured

    According to the person familiar with the talks, once a final agreement is reached, OpenAI will formally purchase Nvidia systems. Nvidia will then invest an initial $10 billion in OpenAI, which was last valued at $500 billion.

    The first delivery of Nvidia hardware is expected in late 2026, with one gigawatt of computing power coming online in the second half of that year on the Vera Rubin platform.

    Analysts welcomed the agreement but raised concerns about whether some of Nvidia’s investment could flow back to it through OpenAI’s chip purchases.

    “On the one hand this helps OpenAI deliver on what are some very aspirational goals for compute infrastructure, and helps Nvidia ensure that that stuff gets built. On the other hand the ‘circular’ concerns have been raised in the past, and this will fuel them further,” said Stacy Rasgon, an analyst at Bernstein.

    OpenAI’s other AI chip ambitions

    OpenAI, like Google and Amazon, has been exploring its own custom chips to lower costs and reduce dependence on Nvidia. A person close to the company said this deal does not change its existing compute plans, including its collaboration with Microsoft.

    Earlier this year, Reuters reported that OpenAI was working with Broadcom and Taiwan Semiconductor Manufacturing Co. to design chips. Following news of the Nvidia partnership, Broadcom shares slipped 0.8%.

    OpenAI has grown to more than 700 million weekly active users, with adoption across businesses of all sizes and by developers worldwide. The Nvidia partnership is expected to help the company push forward on its goal of building artificial general intelligence.

    Industry backdrop

    The OpenAI-Nvidia pact adds to a growing list of alliances among tech giants. Microsoft has invested billions in OpenAI since 2019. Nvidia recently announced a chip collaboration with Intel and pledged $5 billion in funding. Nvidia also took part in OpenAI’s $6.6 billion round in October 2024.

    The size of the new deal could draw antitrust attention. Last year, the Justice Department and Federal Trade Commission reached an agreement to allow closer scrutiny of Microsoft, OpenAI, and Nvidia’s roles in the AI sector. So far, the Trump administration has taken a lighter approach than the Biden administration on competition issues.

    OpenAI and Microsoft also said earlier this month they had signed a non-binding agreement to restructure OpenAI into a for-profit company, signalling further governance changes.

    Antitrust lawyer Andre Barlow from Doyle, Barlow & Mazard said the Nvidia deal may reinforce both companies’ positions in ways that limit rivals.

    “The deal could change the economic incentives of Nvidia and OpenAI as it could potentially lock in Nvidia’s chip monopoly with OpenAI’s software lead. It could potentially make it more difficult for Nvidia competitors like AMD in chips or OpenAI’s competitors in models to scale,” Barlow said.

    He added that the Trump administration has so far taken a pro-business approach to regulation, removing barriers that could slow AI growth.

    See also: Thinking Machines becomes OpenAI’s first services partner in APAC

    AI Latest News

  • Generative AI in Retail: High Security Costs Amid Rapid Adoption
    baoshi.rao

    Published Time: 2025-09-24T16:16:39+00:00

    The retail industry is among the leaders in generative AI adoption, but a new report highlights the security costs that accompany it.

    According to cybersecurity firm Netskope, the retail sector has all but universally adopted the technology, with 95% of organisations now using generative AI applications. That’s a huge jump from 73% just a year ago, showing just how fast retailers are scrambling to avoid being left behind.

    However, this AI gold rush comes with a dark side. As organisations weave these tools into the fabric of their operations, they are creating a massive new surface for cyberattacks and sensitive data leaks.

    The report’s findings show a sector in transition, moving from chaotic early adoption to a more controlled, corporate-led approach. There’s been a shift away from staff using their personal AI accounts, which has more than halved from 74% to 36% since the beginning of the year. In its place, usage of company-approved GenAI tools has more than doubled, climbing from 21% to 52% in the same timeframe. It’s a sign that businesses are waking up to the dangers of "shadow AI" and trying to get a handle on the situation.

    In the battle for the retail desktop, ChatGPT remains king, used by 81% of organisations. Yet, its dominance is not absolute. Google Gemini has made inroads with 60% adoption, and Microsoft’s Copilot tools are hot on its heels at 56% and 51% respectively. ChatGPT’s popularity has recently seen its first-ever dip, while Microsoft 365 Copilot’s usage has surged, likely thanks to its deep integration with the productivity tools many employees use every day.

    Beneath the surface of this generative AI adoption by the retail industry lies a growing security nightmare. The very thing that makes these tools useful – their ability to process information – is also their biggest weakness. Retailers are seeing alarming amounts of sensitive data being fed into them.

    The most common type of data exposed is the company’s own source code, making up 47% of all data policy violations in GenAI apps. Close behind is regulated data, like confidential customer and business information, at 39%.

    In response, a growing number of retailers are simply banning apps they deem too risky. The app most frequently finding itself on the blocklist is ZeroGPT, with 47% of organisations banning it over concerns it stores user content and has even been caught redirecting data to third-party sites.

    This newfound caution is pushing the retail industry towards more serious, enterprise-grade generative AI platforms from major cloud providers. These platforms offer far greater control, allowing companies to host models privately and build their own custom tools.

    OpenAI via Azure and Amazon Bedrock are tied for the lead, each used by 16% of retail companies. But these are no silver bullets; a simple misconfiguration could inadvertently connect a powerful AI directly to a company’s crown jewels, creating the potential for a catastrophic breach.

    The threat isn’t just from employees using AI in their browsers. The report finds that 63% of organisations are now connecting directly to OpenAI’s API, embedding AI deep into their backend systems and automated workflows.

    This AI-specific risk is part of a wider, troubling pattern of poor cloud security hygiene. Attackers are increasingly using trusted names to deliver malware, knowing that an employee is more likely to click a link from a familiar service. Microsoft OneDrive is the most common culprit, with 11% of retailers hit by malware from the platform every month, while the developer hub GitHub is used in 9.7% of attacks.

    The long-standing problem of employees using personal apps at work continues to pour fuel on the fire. Social media sites like Facebook and LinkedIn are used in nearly every retail environment (96% and 94% respectively), alongside personal cloud storage accounts. It’s on these unapproved personal services that the worst data breaches happen. When employees upload files to personal apps, 76% of the resulting policy violations involve regulated data.

    For security leaders in retail, the era of casual generative AI experimentation is over. Netskope’s findings are a warning that organisations must act decisively. It’s time to gain full visibility of all web traffic, block high-risk applications, and enforce strict data protection policies to control what information can be sent where.

    Without adequate governance, the next innovation could easily become the next headline-making breach.

    See also: Martin Frederik, Snowflake: Data quality is key to AI-driven growth

    AI Latest News

  • Huawei's Plan to Unite Thousands of AI Chips as One Supercomputer
    baoshi.rao

    Published Time: 2025-09-25T09:23:08+00:00

    Imagine connecting thousands of powerful AI chips scattered in dozens of server cabinets and making them work together as if they were a single, massive computer. That is exactly what Huawei demonstrated at HUAWEI CONNECT 2025, where the company unveiled a breakthrough in AI infrastructure architecture that could reshape how the world builds and scales artificial intelligence systems.

    Instead of traditional approaches where individual servers work somewhat independently, Huawei’s new SuperPoD technology creates what the company’s executives describe as a single logical machine made from thousands of separate processing units, allowing them to "learn, think, and reason as one."

    The implications extend beyond impressive technical specifications, representing a shift in how AI computing power can be organised, scaled, and deployed in industries.

    The technical foundation: UnifiedBus 2.0

    At the core of Huawei’s infrastructure approach is UnifiedBus (UB). Yang Chaobin, Huawei’s Director of the Board and CEO of the ICT Business Group, explained that “Huawei has developed the groundbreaking SuperPoD architecture based on our UnifiedBus interconnect protocol. The architecture deeply interconnects physical servers so that they can learn, think, and reason like a single logical server.”

    The technical specifications reveal the scope of this achievement. The UnifiedBus protocol addresses two challenges that have historically limited large-scale AI computing: the reliability of long-range communications and the trade-off between bandwidth and distance. Traditional copper connections provide high bandwidth but only over short distances, typically spanning perhaps two cabinets.

    Optical cables support longer range but suffer from reliability issues that become more problematic the greater the distance and scale. Eric Xu, Huawei’s Deputy Chairman and Rotating Chairman, said that solving these fundamental connectivity challenges was essential to the company’s AI infrastructure strategy.

    Xu detailed the breakthrough solutions in terms of the OSI model: “We have built reliability into every layer of our interconnect protocol, from the physical layer and data link layer, all the way up to the network and transmission layers. There is 100-ns-level fault detection and protection switching on optical paths, making any intermittent disconnections or faults of optical modules imperceptible at the application layer.”

    SuperPoD architecture: Scale and performance

    The Atlas 950 SuperPoD represents the flagship implementation of this architecture, comprising up to 8,192 Ascend 950DT chips in a configuration that Xu described as delivering "8 EFLOPS in FP8 and 16 EFLOPS in FP4. Its interconnect bandwidth will be 16 PB/s. This means that a single Atlas 950 SuperPoD will have an interconnect bandwidth over 10 times higher than the entire globe’s total peak internet bandwidth."

    The specifications are more than incremental improvements. The Atlas 950 SuperPoD occupies 160 cabinets across 1,000 m², with 128 compute cabinets and 32 communications cabinets linked by all-optical interconnects. The system’s memory capacity reaches 1,152 TB, and Huawei claims latency of 2.1 microseconds across the entire system.

    Later in the production pipeline will be the Atlas 960 SuperPoD, which is set to incorporate 15,488 Ascend 960 chips in 220 cabinets covering 2,200 m². Xu said it will deliver "30 EFLOPS in FP8 and 60 EFLOPS in FP4, and come with 4,460 TB of memory and 34 PB/s interconnect bandwidth."
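
    Dividing the quoted system totals by the chip counts gives a rough sense of the per-chip resources implied; these per-chip figures are inferred from the totals above, not specifications Huawei has stated.

        # Per-chip figures inferred from the quoted system totals (not official per-chip specs).
        systems = {
            # name: (chips, FP8 exaFLOPS, memory in TB, interconnect in PB/s)
            "Atlas 950 SuperPoD": (8_192, 8, 1_152, 16),
            "Atlas 960 SuperPoD": (15_488, 30, 4_460, 34),
        }

        for name, (chips, eflops, mem_tb, bw_pbs) in systems.items():
            print(f"{name}: ~{eflops * 1e3 / chips:.2f} PFLOPS FP8, "
                  f"~{mem_tb * 1e3 / chips:.0f} GB of memory, "
                  f"~{bw_pbs * 1e3 / chips:.2f} TB/s of interconnect per chip")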

    Beyond AI: General-purpose computing applications

    The SuperPoD concept extends beyond AI workloads into general-purpose computing through the TaiShan 950 SuperPoD. Built on Kunpeng 950 processors, this system addresses enterprise challenges in replacing legacy mainframes and mid-range computers.

    Xu positioned this as particularly relevant for the finance sector, where “the TaiShan 950 SuperPoD, combined with the distributed GaussDB, can serve as an ideal alternative, and replace — once and for all — mainframes, mid-range computers, and Oracle’s Exadata database servers.”

    Open architecture strategy

    Perhaps most significantly for the broader AI infrastructure market, Huawei announced the release of UnifiedBus 2.0 technical specifications as open standards. The decision reflects both strategic positioning and practical constraints.

    Xu acknowledged that “the Chinese mainland will lag behind in semiconductor manufacturing process nodes for a relatively long time” and emphasised that “sustainable computing power can only be achieved with process nodes that are practically available.”

    Yang framed the open approach as ecosystem building: “We are committed to our open-hardware and open-source-software approach that will help more partners develop their own industry-scenario-based SuperPoD solutions. This will accelerate developer innovation and foster a thriving ecosystem.”

    The company plans to open-source both hardware and software components. On the hardware side, that includes NPU modules, air-cooled and liquid-cooled blade servers, AI cards, CPU boards, and cascade cards. On the software side, Huawei has committed to fully open-sourcing its CANN compiler tools, Mind series application kits, and openPangu foundation models by 31 December 2025.

    Market deployment and ecosystem impact

    Real-world deployment provides some validation for these technical claims. Over 300 Atlas 900 A3 SuperPoD units have already shipped in 2025, deployed with more than 20 customers across sectors including the internet, finance, telecom carriers, electricity, and manufacturing.

    The implications for the development of China’s AI infrastructure are substantial. By creating an open ecosystem around domestic technology, Huawei is addressing the challenge of building competitive AI infrastructure within the constraints of the semiconductor manufacturing capacity available to it. Its approach enables broader industry participation in developing AI infrastructure solutions without requiring access to the most advanced process nodes.

    For the global AI infrastructure market, Huawei’s open architecture strategy introduces an alternative to the tightly integrated, proprietary hardware and software approach dominant among Western competitors. Whether the ecosystem proposed by Huawei can achieve comparable performance and maintain commercial viability remains to be demonstrated at scale.

    Ultimately, the SuperPoD architecture represents more than an incremental advance for AI computing. Huawei is proposing a fundamental rethink of how massive computational resources are connected, managed, and scaled. The open-source release of its specifications and components will test whether collaborative development can accelerate AI infrastructure innovation across an ecosystem of partners. That has the potential to reshape competitive dynamics in the global AI infrastructure market.

    See also: Huawei commits to training 30,000 Malaysian AI professionals as local tech ecosystem expands

    AI Latest News
