
Top 10 GenAI Apps Most Often Blocked by Organizations

A new report from cloud security specialist Netskope explores cybersecurity threats, including the risks posed by GenAI and the related apps most often blocked from employee use by organizations.

"While 94% of organizations are using GenAI apps, more than 99% of organizations have controls in place to mitigate the risks that GenAI apps pose," says the company's latest study: Cloud and Threat Report: January 2025.

It explores key trends in four areas of cybersecurity risks facing organizations worldwide -- adversarial risk, social engineering risk, personal app risk, and GenAI app risk -- highlighting the strategies organizations use to manage these risks.

It is the last of these, GenAI app risk, that provides the list of most-blocked apps.

"Blocks are an effective strategy for apps that serve no business purposes and should never be used under any circumstances," the report says. "On average, the number of GenAI apps blocked per organization has remained steady over the past year and is currently at 2.4 apps per organization. By contrast, there has been a significant increase in the number of apps blocked by the top 25% of organizations, where the number of blocked apps more than doubled from 6.3 to 14.6 over the past year.

"The industries that block the most apps are the highly regulated banking, financial services, healthcare, and telecommunications industries. At the other end of the spectrum, the manufacturing, retail, and technology industries block the fewest GenAI apps on average."

Number of Apps Blocked (source: Netskope).

Meanwhile, the list of the top 10 most-blocked AI apps cuts across multiple categories, including writing assistants, chatbots, image generators, and audio generators. While Netskope said the list has remained mostly the same since the summer of 2024, one exception is Perplexity AI, which the company said has become less commonly blocked as it has grown in popularity.

Most-Blocked Apps (source: Netskope).

The report didn't explore why the apps were blocked. Organizations might block GenAI or other apps for any number of reasons, and inclusion on Netskope's list shouldn't be read as commentary on any app or its functionality. With that in mind, here's the list of apps, with brief functionality descriptions (not provided in the report):

  1. QuillBot: A writing assistant designed to improve and streamline text creation. It specializes in rephrasing content while preserving its original meaning, offering various modes tailored for different writing goals, such as enhancing clarity, creativity, or formal tone.
  2. Beautiful.ai: A presentation design tool aimed at simplifying the process of creating visually appealing slides. It automates much of the design work by offering pre-designed templates and layouts that adjust dynamically as content is added.
  3. AiChatting: A chatbot platform designed to facilitate conversational interactions for various purposes. It enables businesses and individuals to create automated chat experiences, providing assistance, answering queries, or simulating human-like dialogue.
  4. Pixlr: An online photo editing platform that offers a range of tools for creating and enhancing images. It provides features like background removal, filters, overlays, and advanced editing capabilities, all within a user-friendly interface.
  5. Tactiq: A tool designed to capture, transcribe, and summarize conversations from virtual meetings. It integrates seamlessly with platforms like Zoom, Google Meet, and Microsoft Teams, automatically generating accurate real-time transcriptions.
  6. Writesonic: A writing assistant designed to help create high-quality content quickly and efficiently. It generates various types of content, including blog posts, social media captions, product descriptions, emails, and more. The tool provides customizable templates and uses natural language processing to produce text that aligns with user inputs and tone preferences.
  7. DeepAI: A platform that provides a suite of tools and APIs for generating and enhancing visual and textual content. It offers features like image generation, text-to-image creation, and text analysis, making it accessible to developers, researchers, and creatives. The platform supports tasks such as sentiment analysis, image recognition, and natural language processing.
  8. ElevenLabs: A platform specializing in realistic text-to-speech and voice synthesis. It converts written text into lifelike speech, offering a variety of voices, accents, and languages. Users can customize voice tones or even create unique, personalized voices for tailored applications.
  9. Craiyon: A tool designed for generating images from text prompts. Formerly known as "DALL·E Mini," it allows users to input descriptive phrases, and the AI creates corresponding visual representations. Craiyon is accessible through a web interface and offers a simple, user-friendly experience for creating art, concept visuals, or experimental designs.
  10. PoeAI: A platform that provides access to a range of AI chatbots and tools for conversational interactions. It allows users to explore and interact with multiple AI models, such as those designed for answering questions, generating creative content, or providing technical assistance.

While Netskope didn't delve too deeply into the practice of organizations blocking GenAI apps, it has become a growing trend driven by a combination of security, legal, ethical, and operational concerns. The practice is not universal but is common in industries that handle sensitive data, intellectual property, or highly regulated information.

More specifically, reasons an organization might block an app include:

  • Data Security and Privacy Risks:
    • Sensitive Data Exposure: Employees may inadvertently input sensitive or proprietary information into GenAI apps, risking exposure of confidential data.
    • Cloud Processing: Many GenAI apps process data in the cloud, which can create vulnerabilities if data is intercepted or mishandled.
    • Unknown Storage Policies: Companies may not trust how apps handle or store user inputs, fearing that data could be retained or misused.
  • Intellectual Property Concerns:
    • Loss of Control: Generative AI models may retain data patterns from user inputs, which could potentially lead to misuse or duplication of proprietary information.
    • Copyright Issues: Using GenAI outputs might infringe on third-party intellectual property rights, exposing the company to legal risks.
  • Compliance and Regulatory Requirements:
    • Industry Regulations: Certain industries, such as healthcare, finance, or defense, have strict regulations about how data can be handled and shared.
    • Global Data Protection Laws: Compliance with laws like GDPR, HIPAA, or CCPA may necessitate blocking tools that don't meet legal standards.
  • Productivity Concerns:
    • Non-Work Usage: Employees might use these tools for personal purposes, reducing work efficiency.
    • Quality Control: Over-reliance on AI-generated content could result in reduced human critical thinking or substandard work output.
  • Ethical Considerations:
    • Bias in Outputs: GenAI systems can produce biased or inappropriate outputs, which might reflect poorly on the organization.
    • Misuse Risk: Generative AI tools could be misused for unethical purposes, such as creating misleading information or deepfakes.
Netskope also provided some blocking recommendations (a generic policy sketch follows the list):

  • Block access to apps that do not serve any legitimate business purpose or that pose a disproportionate risk to the organization. A good starting point is a policy to allow reputable apps currently in use while blocking all others.
  • Block downloads from apps and instances that are not used in your organization to reduce your risk surface to only those apps and instances that are necessary for the business.
  • Block uploads to apps and instances that are not used in your organization to reduce the risk of accidental or deliberate data exposure from insiders or abuse by attackers.
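
To make those recommendations concrete, here is a minimal, hypothetical sketch of how such a policy might be expressed in code. It is not Netskope's policy engine or API; the app names, instance identifiers, and the decide() helper are placeholders invented for illustration.

# Hypothetical allow-list policy sketch: allow vetted GenAI apps the business uses,
# block everything else, and restrict uploads and downloads to sanctioned instances.
from dataclasses import dataclass

SANCTIONED_APPS = {"chatgpt-enterprise", "copilot-corporate"}  # placeholder app names
SANCTIONED_INSTANCES = {"acme-corp-tenant"}                    # placeholder tenant IDs

@dataclass
class Request:
    app: str       # e.g. "quillbot"
    instance: str  # tenant/instance identifier
    activity: str  # "browse", "upload", or "download"

def decide(req: Request) -> str:
    """Return 'allow' or 'block' for a single request."""
    # Block apps with no sanctioned business purpose (default-deny).
    if req.app not in SANCTIONED_APPS:
        return "block"
    # Block uploads and downloads for instances the organization does not use.
    if req.activity in {"upload", "download"} and req.instance not in SANCTIONED_INSTANCES:
        return "block"
    return "allow"

print(decide(Request("quillbot", "personal", "browse")))                      # block: unsanctioned app
print(decide(Request("chatgpt-enterprise", "personal", "upload")))            # block: unsanctioned instance
print(decide(Request("chatgpt-enterprise", "acme-corp-tenant", "download")))  # allow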

Organizations use a variety of methods to block apps, including network restrictions, device-level controls, and policy implementations, backed by monitoring and enforcement.
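
As a simple illustration of the network-restriction approach, the sketch below generates hosts-file-style DNS-sinkhole entries for app domains an organization has chosen to block. The domains are hypothetical examples, not taken from the report; real deployments typically rely on a secure web gateway, DNS filter, or firewall categories rather than static host entries.

# Illustrative network-level block: map blocked app domains (hypothetical examples)
# to a non-routable address so clients cannot reach them.
BLOCKED_APP_DOMAINS = [
    "example-genai-writer.com",
    "example-image-generator.io",
]
SINKHOLE_IP = "0.0.0.0"

def hosts_entries(domains):
    """Render one hosts-file line per blocked domain and its www. variant."""
    lines = []
    for domain in domains:
        lines.append(f"{SINKHOLE_IP} {domain}")
        lines.append(f"{SINKHOLE_IP} www.{domain}")
    return "\n".join(lines)

print(hosts_entries(BLOCKED_APP_DOMAINS))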

While some organizations tout the enhanced security, compliance, and control over data achieved through GenAI app blocking, others are concerned the practice could limit innovation, stifle employee creativity, and lead to frustration or workarounds, such as using personal devices. Such workarounds have been called Bring Your Own AI or Shadow AI.

Netskope said the report is based on anonymized usage data collected by the Netskope One platform with prior authorization from a subset of Netskope customers. The findings focus on threat detections identified by Netskope's Next Generation Secure Web Gateway (SWG). The report does not evaluate the specific impact of individual threats but instead provides aggregate statistics and insights.

The data spans the period from Nov. 1, 2023, to Nov. 30, 2024, and reflects trends in attacker tactics, user behavior, and organizational policy. The analysis is derived from real-world usage and provides an overview of threat protection activity across millions of users.

About the Author

David Ramel is an editor and writer at Converge 360.
