The latest AI trends: the Slack API, large language models (LLMs), and the future of our data privacy
Hello, I'm John, a veteran blogger. AI technology is evolving every day and is about to bring major changes to our work and lives. Recently, you may have been hearing keywords like "Slack API," "large language model (LLM)," and "data privacy" more and more often, and many people find them hard to pin down. Don't worry! In this article, I'll explain in beginner-friendly terms how these technologies relate to tools we use every day, especially the communication platform Slack, and, most importantly, answer the question "Is our data safe?" Recently, Salesforce, Slack's parent company, announced a major change regarding the use of Slack data to train AI (especially LLMs). Let's take a look at what this means!
The basics: What are the Slack API, LLMs, and data privacy, and what has changed?
First, let's look at the basic meanings of words.
- Slack API: API stands for "Application Programming Interface." Think of it as a "gateway" or "bridge" that lets different pieces of software exchange information. With the Slack API, you can connect Slack to other tools and services (for example, calendar apps and project management tools) so that notifications and shared information flow into Slack automatically.
- Large language model (LLM): This is a type of AI (artificial intelligence) that can understand natural language and generate text much like a human does. ChatGPT, which you may already know, is one example of an LLM. By learning from large amounts of text data, an LLM can perform a variety of tasks, such as answering questions, summarizing text, and translating.
- Data privacy: This is the idea of how personal information should be collected, used, and protected. It is the set of rules and rights that protect the information we provide when using internet services, and the data we create within those services (such as the contents of conversations in Slack), from being used or leaked by companies without permission.
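To make the "bridge" idea concrete, here is a minimal Python sketch of what a Slack API call looks like under the hood. `chat.postMessage` is a real Slack Web API method, but the token and channel below are placeholders, and this function only builds the request rather than sending it:

```python
import json

SLACK_API_BASE = "https://slack.com/api"

def build_post_message_request(token: str, channel: str, text: str) -> dict:
    """Build (but do not send) a request to Slack's chat.postMessage method.

    The token and channel here are illustrative placeholders, not working
    credentials.
    """
    return {
        "url": f"{SLACK_API_BASE}/chat.postMessage",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json; charset=utf-8",
        },
        "body": json.dumps({"channel": channel, "text": text}),
    }

req = build_post_message_request("xoxb-placeholder", "#general", "Build finished!")
```

In real code you would hand the `url`, `headers`, and `body` to an HTTP client, or more simply use the official `slack_sdk` package, which wraps these details for you.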
What is the problem and what is being solved?
LLMs are very smart, but to get that intelligence they need a huge amount of training data. Communication tools like Slack store a lot of valuable information, such as daily work updates, project discussions, and idea sharing. If this Slack data were used to train an LLM, it might be possible to create a smarter AI assistant specialized for specific tasks. For example, an AI that has learned from past interactions within the company could answer questions from new employees and surface related information.
But there is a big problem here: data privacy and security. Conversations in Slack often contain confidential or personal information. There were concerns that this information could be used unintentionally for LLM training or leaked externally. In particular, there was strong resistance to the idea of large amounts of a company's Slack data being fed into an LLM developed by an outside company.
In response, Slack's parent company, Salesforce, updated the Slack API terms of service in May 2025 to expressly prohibit bulk export of data via the Slack API and the use of such data for LLM training. This is a move to protect user data privacy and exercise stricter control over how data in Slack is used. The change significantly affects how third-party AI apps (AI tools made by external companies that integrate with Slack) can handle Slack data.
Key changes: What's new
Here are some notable changes to Salesforce's Slack API terms of service:
- Prohibition on use of data for LLM training: Data accessed through the Slack API can no longer be used to train LLMs. This represents a major policy change for many AI development companies and for companies looking to build their own in-house AI.
- Bulk data export limitations: Bulk export of Slack data via API is now prohibited, making it harder for companies to ingest Slack data in bulk into external LLMs.
- Directing developers to the new Real-Time Search API: Instead, Slack appears to be offering a new mechanism called the "Real-Time Search API" to strengthen search within Slack. This is thought to be a move to encourage the use of AI within the Slack platform rather than releasing data to the outside.
These changes are part of a broader discussion about how AI should use enterprise data, as companies must balance how they can harness the power of AI while protecting their valuable data.
The technical details: what's changed and why is it important?
At the heart of these changes are the "API" and "LLM" as well as how the "data" surrounding them is handled. Let's take a closer look.
The role of the Slack API and how it has been used
As mentioned above, APIs enable software integration. The Slack API was a very powerful tool for developers. For example, it was possible to use it to:
- Developing a custom notification bot: Notify when a message containing a specific keyword is posted, aggregate error logs in Slack, etc.
- Workflow automation: Typing a specific command in Slack could create a task in a project management tool, update a customer database, and so on.
- Data analysis and reporting: Analyze communication patterns in Slack to measure team engagement or understand the volume of discussion around a particular topic.
- Developing an AI assistant: This is the area most affected by the change: collecting conversation data from Slack and using it to develop an AI chatbot that is knowledgeable about internal company information.
Until now, some companies and developers have been trying to build smarter AI applications by pulling data from Slack through this API and using it as training data for LLM.
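As a concrete illustration of the "custom notification bot" use case above, here is a small, hypothetical sketch. The keyword list and function name are invented for illustration; in a real bot, this check would run inside an Events API `message` handler rather than as a standalone function:

```python
# Hypothetical keyword-alert check for a custom Slack notification bot.
ALERT_KEYWORDS = ("error", "outage", "failed")

def should_alert(message_text: str) -> bool:
    """Return True when a Slack message should trigger a notification."""
    lowered = message_text.lower()
    return any(keyword in lowered for keyword in ALERT_KEYWORDS)

urgent = should_alert("Deploy FAILED on prod")   # True: contains "failed"
routine = should_alert("Lunch at noon?")         # False: no alert keyword
```

A production bot would subscribe to message events, run a check like this on each incoming message, and then post an alert via the Web API.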
Why LLMs "want" your Slack data
An LLM's performance is greatly affected by the quality and quantity of the data it learns from. LLMs trained on general website and book data are already very capable, but training on the "real" information of a specific company or industry enables more specialized, context-aware responses.
Slack is brimming with this kind of "live" information.
- Daily business communication: Who is talking about what, and what language is being used.
- Project Progress: Challenges, solutions, and decision-making processes.
- In-house knowledge: Past troubleshooting, success stories, and sharing of expertise.
By training an LLM on this data, it may be possible to create an AI that understands the company's internal circumstances well enough to answer specific questions such as, "Please tell me the solution to problem B that occurred in project A last year." This is why many companies want to use Slack data to train LLMs.
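To see why this data is so attractive as training material, here is a hypothetical sketch of how Slack question-and-answer threads could be turned into supervised fine-tuning records. The prompt/completion JSON format and the messages are purely illustrative, not any vendor's official schema, and under the new terms this kind of export pipeline is exactly what is no longer permitted:

```python
import json

# A hypothetical Slack thread: a question and the reply that answered it.
THREAD = [
    ("alice", "How did we solve problem B in project A last year?"),
    ("bob", "We switched the queue to at-least-once delivery and added retries."),
]

def to_training_record(question_msg: tuple, answer_msg: tuple) -> str:
    """Pair a question message with its reply as one JSON training example."""
    return json.dumps({
        "prompt": question_msg[1],
        "completion": answer_msg[1],
    })

record = to_training_record(THREAD[0], THREAD[1])
```

Multiplied across years of threads, records like this would give an LLM exactly the company-specific knowledge described above, which is also why they carry the privacy risks discussed next.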
Challenges from a data privacy perspective
However, this use of Slack as learning data raises significant privacy concerns, as Slack conversations may contain the following information:
- Personal Information: Employee names, contact information, and personal topics.
- Confidential information: Unreleased product information, business strategies, and customer data.
- Negative information: Company complaints and personal opinions.
There is a risk that this information, even when used only for training, may be passed on to external LLM developers, or that the LLM may unintentionally "remember" it and leak it in a different context. For example, if you ask an LLM, "Do you know anything about X's new product?", it may answer based on internal Slack information (which is not publicly available) contained in its training data.
Salesforce (Slack)'s change to its API terms and restrictions on data use for LLM training are the result of taking these risks seriously. The company has decided that protecting user companies' data and maintaining the platform's trustworthiness is its top priority. Headlines such as "Salesforce changes Slack API terms to block bulk data access for LLMs" and "Salesforce Tightens Grip on Slack Data to Block AI Rivals" make the intention to strengthen control over data use clear.
Impact for users and developers
How will this change to the Slack API terms affect companies that use Slack and developers who build Slack-integrated apps?
Impact on businesses using Slack
- Restrictions on third-party AI tools: Some AI tools that previously used Slack data (especially those that used it to train external LLMs) may have limited functionality or may no longer be available. According to an article in MarketingTechNews, "Apps not officially listed in the Slack..." may be affected.
- Impact on in-house developed AI: If a company was planning to extract Slack data and train its own LLM to build an AI system, they may need to rethink this plan.
- Expectations for Slack's native AI features: On the other hand, this may encourage the use of AI features provided by Slack itself (for example, "Agents & Assistants," Slack's recently announced framework for building conversational apps with LLMs). These features are expected to operate entirely within the Slack platform and to be designed with data privacy in mind.
- Strengthening data governance: For companies, this may be a good opportunity to reexamine how their data is being used and to strengthen data governance (a system for proper data management).
Impact on developers building Slack-integrated apps
- Compliance with the API terms of service: Developers must strictly adhere to the new API terms. In particular, the use of Slack data for LLM training and bulk exports are now expressly prohibited, so any apps that violate these terms must be modified or discontinued.
- Searching for new development approaches: When developing an AI app that uses Slack data, the mainstream approach will be to use the new APIs provided by Slack (such as the Real-Time Search API) and AI functions within the platform (such as Agents & Assistants), rather than exporting the data and training an LLM on it. Retrieval-Augmented Generation (RAG), which searches for the necessary information in real time in response to a user's question and provides it to the LLM to generate an answer, differs from using Slack data as a pre-training dataset and may be one option for the future.
- Ensuring transparency: Apps will need to explain more clearly to users how they use Slack data and what measures they take to protect privacy.
Overall, these changes are a game changer for how data is used in AI development, signaling a shift toward more privacy-focused, platform-driven use of AI.
Use cases and future prospects
How will the use of AI on Slack evolve in light of this change in terms of service?
Currently approved use cases
Although the Slack API terms of service prohibit the use of data for LLM training, this does not mean the use of AI technology is prohibited entirely. For example, the following uses remain possible or are even encouraged:
- Real-time search and summarization in Slack: A feature that uses AI to present highly relevant information and summarize the contents of long threads when users search for information within Slack, utilizing Slack's "Real-Time Search API" and other services.
- Utilizing Retrieval-Augmented Generation (RAG): When a user asks a question, the LLM does not generate an answer directly from the training data, but first searches for relevant information in Slack (or permitted external documents), and then constructs an answer based on the search results. This allows the LLM to respond based on untrained or up-to-date information, reducing the reliance on pre-training data. The AWS blog post "Integrate Amazon Bedrock Agents with Slack" describes how to integrate Amazon Bedrock Agents into a Slack workspace, which can also be considered a form of RAG.
- Using Slack's official AI features: Slack is introducing "Agents & Assistants" as a new way for developers to build AI-enabled conversational apps. These provide a framework for implementing AI features in a way that complies with Slack's data policies.
- Automating routine tasks: AI can still power bots that send standardized messages based on specific triggers or automatically answer simple questions, uses that require no LLM "training," or only a very limited scope of it.
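The RAG idea above can be sketched in a few lines. This toy version retrieves from an in-memory message list using naive keyword overlap; a real system would use embeddings, a permitted search API (rather than raw exports), and an actual LLM call on the resulting prompt:

```python
# Toy RAG loop: retrieve relevant messages, then build a grounded prompt.
MESSAGES = [
    "The staging outage last March was caused by an expired TLS certificate.",
    "Reminder: team lunch on Friday.",
    "We fixed the login bug by rotating the API keys.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank docs by word overlap with the question (stand-in for real search)."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble the prompt an LLM would answer from the retrieved context."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("What caused the staging outage?", MESSAGES)
```

The key point is that no message ever becomes training data: relevant text is fetched at question time and discarded afterwards, which is what distinguishes RAG from the now-prohibited bulk-export-and-train approach.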
The future of AI in Slack
At first glance, this change in regulations may seem to limit the freedom of AI use, but in the long term, it can be seen as an important step towards safer and more reliable use of AI.
The future use of AI on Slack is expected to proceed in the following directions:
- The evolution of privacy protection technology: Technological developments will advance to improve the accuracy of AI while protecting privacy, such as anonymizing data and introducing differential privacy (a technique that statistically reduces the impact of individual data). Solutions such as the Skyflow LLM Privacy Vault claim to provide comprehensive privacy protection to prevent sensitive data from leaking to LLMs.
- Platform-driven AI ecosystem: Slack (Salesforce) itself may provide secure AI development tools and APIs for developers and build an AI ecosystem that is complete within its platform. This would let users manage their data while enjoying the convenience of AI.
- More sophisticated in-context learning: Rather than having LLMs "learn" from large amounts of data in advance, we may see greater emphasis on in-context learning, where the LLM responds based on the context provided during its interaction with the user.
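As a taste of the differential-privacy idea mentioned above, here is a toy Python sketch of the classic Laplace mechanism for releasing a count privately. This is a textbook illustration only, unrelated to any specific Slack or Skyflow API:

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Draw Laplace(0, scale) noise via inverse-transform sampling."""
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query changes by at most 1 when one person's data is added
    or removed (sensitivity 1), so the required noise scale is 1/epsilon.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
noisy = private_count(120, epsilon=0.5, rng=rng)
```

The noise masks any single individual's contribution while keeping aggregate statistics useful, which is the trade-off such privacy-preserving analytics techniques aim for.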
Salesforce's move to protect Slack's data from AI competitors is also considered part of a strategy to strengthen collaboration with its own AI (Einstein, etc.) and utilize customer data within its own ecosystem. This is a move that other major platforms may follow, highlighting the value of data in the AI era and the importance of controlling it.
Comparison with competitors: Data access and AI strategies
The reason behind Slack (Salesforce)'s decision to restrict data access is the big challenge facing the industry as a whole: how to balance the AI development race with data privacy. How are other platforms and AI development companies responding to this issue?
Slack (Salesforce) approach
- Features: Prioritizing data protection, promoting the use of AI within in-house platforms, and restricting the free use of data by third parties.
- The aim: Gaining user trust, strengthening data governance, strengthening the ecosystem through collaboration with the company's own AI products (e.g., Salesforce Einstein), and preventing competitors from using the company's platform data.
- Impact: In the short term, it may cause confusion for some third-party developers and users. In the long term, it is expected to create a safer and more controlled environment for using AI, though it may also somewhat constrain the freedom to innovate.
Common approaches (possible) of other platforms and AI companies
- An open approach: Some platforms and open source communities may be more open about the use of data and the development of LLMs, which could accelerate innovation but raises challenges in managing data privacy and security risks.
- Clarifying data usage terms and strengthening consent collection: Many companies will move towards greater transparency and clearer consent processes for the use of user data in AI training, driven by regulations such as the GDPR.
- Promoting on-premise and private LLMs: We may see an increase in the trend for companies to operate LLMs in environments under their own control (on-premise) without sending sensitive data to external LLM services, or to use private LLMs trained only on specific company data. Workato's article "How to Scale AI Impact with RAG and LLMs" describes a case study in which using on-premise agents accelerated the adoption of new data sources while adhering to strict data privacy policies.
- Leveraging Metadata: GoodData's blog, "Why AI in Analytics Needs Metadata," discusses an approach that leverages data about data (metadata) rather than the data itself to avoid the risks of sending sensitive data directly to LLM.
After all, data is a valuable resource, like modern oil. The strategies of how each company protects, utilizes, and allows other companies to use (or not) the data they own will be very important in the future AI race. Slack's latest move shows part of that race.
Risks and Cautions
There are some risks and precautions to take when changing the Slack API and using LLM. Understanding these is very important in dealing wisely with AI technology.
Risks and precautions for users
- Changes or discontinuation of tools you depend on: If you use a third-party AI tool that utilizes Slack data, this change in terms may mean that the tool is no longer available or that its functionality is significantly restricted. It is important to check whether the tools you use comply with the new terms.
- Addressing the concern that "data will be used without our knowledge": While Slack's latest measures strengthen data protection, it is still important to read the privacy policy carefully when using any AI service to see how your data is handled. As discussed in the Reddit thread "Online inference is a privacy nightmare," LLMs in particular always carry some risk of data leakage when used online.
- Accuracy of AI answers: LLMs can sometimes generate false or uncertain information in a plausible manner (hallucination). The Cortex documentation "Slack | Cortex" also warns that "As with all Large Language Models (LLM), the AI Assistant may not provide accurate responses." Do not blindly accept information from AI, and make sure that important decisions are confirmed by a human.
Risks and Cautions for Developers
- Frequent changes to API terms of use: Platform API terms of use may change without notice, as in this case. Developers must always check the latest terms and be prepared to respond quickly.
- The Risk of Technical Debt: When building a system that is deeply dependent on the API of a specific platform, there is a risk that changes to the specifications of that API will result in significant technical debt (modification costs).
- Addressing data privacy regulations: Data privacy regulations, such as GDPR (EU), HIPAA (US medical information), and Japan's revised Personal Information Protection Act, are becoming stronger worldwide. Design and development that complies with these regulations is essential. GraffersID's article "What Are LLMs? Benefits, Use Cases, & Top Models in 2025" also emphasizes the importance of data privacy and regulatory compliance.
Common AI Data Privacy Concerns
- Unintentional learning and leakage of data: There is always a risk that an LLM will "remember" confidential or personal information contained in its training data and output it in an unexpected way.
- Reuse and misuse of data: It is also important to consider the possibility that data you provide may be reused for purposes other than the original one.
- Data leak due to security breach: The system that operates LLM itself could become a target of cyber attacks, posing a risk of training data and user data being leaked.
Going forward, it will be necessary for us to understand these risks and approach AI technology with caution.
Expert opinions and analysis (from overseas reports)
Salesforce's recent change to the Slack API terms has been widely covered in overseas IT media. Let's take a look at some of the reports to see what experts think about this move.
- Computerworld (June 2025 article "Salesforce changes Slack API terms to block bulk data …"): This article reports that "Salesforce has changed the terms of the Slack API to prevent LLMs from ingesting data from the platform." In particular, it emphasizes that bulk export of Slack data via the API is prohibited and that data accessed via the Slack API can no longer be used for LLM training. This is interpreted as a clear sign of Salesforce's desire to strengthen control over its own data and put a stop to its free use by external LLM developers.
- MarketingTechNews (June 2025 article "Slack places limits on data use by unofficial helper apps"): The outlet reports that "commentators have pointed to the company's desire to restrict companies from using Slack message data to train large-scale language models (LLMs)." It also mentions that "apps not officially listed on Slack" may be affected, suggesting a large impact on third-party apps.
- The Globe and Mail / TipRanks / AOL / TahawulTech (articles dated June 10, 2025, "Salesforce Tightens Grip on Slack Data to Block AI Rivals", etc.): Almost all of these reports share the same thrust: Salesforce is tightening its management of Slack data to block AI competitors. They cite Salesforce's statement from May that it is "strengthening protections for how data accessed via the Slack API is stored, used, and shared," and position the terms change as a concrete move in that direction. This suggests that Salesforce may be strategically leveraging Slack's wealth of data to advance its own AI strategy (e.g., Einstein GPT) while keeping competitors from using it.
Taking all these reports together, experts seem to view Salesforce's latest move as not just a technical change in regulations, but a strategic one involving a combination of factors, including:
- Enhanced data privacy and security: A legitimate reason: to protect confidential information of user companies.
- Ensuring Data Control: They will not allow other companies, especially companies that could potentially compete in AI development, to freely use the data, which is the "golden egg" of their platform.
- Steering users toward its own AI products: AI features that utilize Slack data will be provided through Salesforce's own AI solutions (e.g., new AI features within Slack and integration with Salesforce Einstein), increasing the value of the company's products.
This move may be a typical example of a platform provider's data strategy in the AI era.
Latest News and Roadmap Highlights
We summarize the latest trends and future prospects regarding Slack API, LLM, and data privacy.
Breaking News: Slack API Terms of Service Update (May 2025)
The most important recent news is the new terms of service for the Slack API, published in May 2025. The "Data usage" section of the terms clarifies the following:
- Prohibition on use of Slack data for LLM training: Data obtained through the Slack API may not be used to train large language models (LLMs).
- Prohibition on bulk export of data: Bulk export of Slack data via API is now prohibited.
- Recommendation to use the Real-Time Search API instead: Organizations will have access to the Real-Time Search API, which is limited to searches within the Slack platform.
The change is reportedly intended to block LLMs from ingesting Slack data, as part of Salesforce's efforts to improve discovery and search across company data.
AI-related developments on Slack's roadmap
Slack (Salesforce) is accelerating its efforts to integrate AI into its platform while keeping data privacy in mind.
- Introducing Agents & Assistants: The Slack API Changelog introduces a new mechanism called "Agents & Assistants." This is described as "a new way to build AI-powered conversational apps by integrating them with your preferred LLM," suggesting that Slack is trying to provide a way for third-party developers to use LLM in a controlled way. This is a move that should be considered in conjunction with the current data usage restrictions.
- Enhanced integration with Salesforce Einstein: Salesforce has its own AI platform "Einstein" and is trying to provide more powerful AI solutions by combining CRM (customer relationship management) data with Slack communication data. This restriction on the use of Slack data is considered to be part of a strategy to prevent data leakage to external LLMs while increasing the value of its own AI.
- Utilizing RAG (Retrieval-Augmented Generation) technology: As mentioned above, RAG is an approach that searches for the necessary information on an as-needed basis and provides it to LLMs, rather than relying on large amounts of pre-trained data. Slack may focus on developing RAG-based AI features as a way to effectively search and utilize the vast amount of information within the platform. The Medium article "Agentic RAG: Company Knowledge Slack Agents" also features RAG as an example of a Slack agent that utilizes corporate knowledge.
Slack's future roadmap is expected to include more sophisticated AI features that improve work efficiency and information access within Slack while respecting user data privacy as much as possible, though this will remain under Salesforce's control and will be developed in line with the company's strategy.
FAQ: Frequently asked questions and answers
We've compiled some common questions that beginners may have regarding the Slack API, LLM, and data privacy.
- Q1: What exactly does the Slack API do?
- A1: The Slack API is a tool for "connecting" Slack with other apps and services. For example, you can set it up so that events in Google Calendar are notified to Slack, or other tools are activated when a specific command is entered in Slack. If someone with programming knowledge uses it, they can create various automation and integration functions to make Slack even more convenient.
- Q2: Is LLM (large-scale language model) something like ChatGPT?
- A2: Yes, that's right! An LLM (large language model) is a type of AI that, like ChatGPT, can hold natural conversations with humans, write text, and answer questions. It learns "language" by reading vast amounts of text. It's a typical example of a smart AI.
- Q3: Why did Slack change its API terms of service? Especially the part about LLM?
- A3: The main reason is to protect everyone's data and privacy. Slack contains a huge amount of conversation data, and if it were used without restriction to train AI (especially LLMs), important information could leak unintentionally or the AI could learn things it shouldn't. So Slack's parent company Salesforce has made it clear that you cannot use Slack data to train AI without permission. Salesforce's desire to promote its own AI technology (Einstein, etc.) may also play a role.
- Q4: Does this mean Slack data can no longer be used for AI?
- A4: It won't be completely unusable. The main restriction is "taking large amounts of Slack data outside and using it as 'learning material' for external LLMs." We expect that the use of AI functions provided by Slack itself and in ways approved by Slack will continue to grow. For example, functions such as AI searching for information within Slack and summarizing conversations will be developed in a way that takes privacy into consideration.
- Q5: Is data privacy really that important?
- A5: Very important! The information we normally exchange on the internet and in apps (such as names, email addresses, and conversation content) is our "property." We have the right to know who is using it and how, and it should be protected from unauthorized use or misuse. Even when companies use our data to train AI, it is important that our privacy is protected with transparency.
- Q6: Will the changes to Slack's API terms affect the Slack features I regularly use?
- A6: There will be little direct impact on daily Slack messaging or standard features. Some "third-party AI apps" that work with Slack may be affected. If you use such apps, please check the announcement from the provider. One of Salesforce's goals is to improve data security so that users can continue to use Slack with peace of mind.
Summary: How to deal wisely with data in the age of AI
This time, we delved into three keywords essential to discussing modern AI technology: the Slack API, large language models (LLMs), and data privacy, along with the latest developments around them, in particular Salesforce's changes to the Slack API terms of service.
Against the backdrop of AI's rapid development and convenience, this change brings us back to a fundamental question: who owns our data, and how should it be treated? Salesforce (Slack)'s decision is a clear statement of its intention to prioritize the protection of user data and to strengthen control over data usage within its own platform. This may restrict some AI development and usage in the short term, but in the long term it may lay the foundation for safer and more reliable use of AI.
For us users,
- Be aware of the privacy policies of the services you use and how they handle your data.
- Strive to understand not only the convenience of AI, but also the mechanisms behind it and potential risks.
- Stay updated with new technologies and regulatory changes.
is becoming increasingly important.
AI has great potential to enrich the way we work and our lives, but to maximize its benefits and enjoy it safely, we must constantly consider the balance between technological advances and social rules. Slack's latest move will be an important case study in exploring that balance.
I hope this article helps you think about AI and data privacy. Technology moves fast, but if you understand the basics, it will be easier to adapt to changes.
Disclaimer: This article is for informational purposes only and does not recommend the use of any specific services or provide investment advice. Please be sure to check the latest information on the official website for the terms of use and privacy policies of each service. Please act at your own discretion and responsibility.
Related links collection
- Slack API Terms of Service | Legal
- Salesforce changes Slack API terms to block bulk data … (Computerworld)
- Slack places limits on data use by unofficial helper apps (MarketingTechNews)
- Developer changelog (Slack API change history)
- Generative AI Data Privacy with Skyflow LLM Privacy Vault (Skyflow)
- Integrate Amazon Bedrock Agents with Slack (AWS Blog)