
AI might be reading your Slack, Teams messages using tech from Aware

Cue the George Orwell reference.

Depending on where you work, there’s a good chance that AI is analyzing your messages in Slack, Microsoft Teams, Zoom and other popular applications.

Large American employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands such as Nestlé and AstraZeneca, have turned to a seven-year-old startup, Aware, to monitor conversations among their rank and file, according to the company.

Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, says the AI helps companies “understand the risk within their communications,” getting a read on employee sentiment in real time rather than depending on an annual or twice-a-year survey.

Using the anonymized data in Aware’s analytics product, clients can see how employees in a certain age group or in a particular geography are responding to a new corporate policy or marketing campaign, according to Schumann. Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, non-compliance, pornography, nudity and other behaviors, he said.

According to Schumann, Aware’s analytics tool, which monitors employee sentiment and toxicity, does not have the ability to flag the names of individual employees. But its standalone eDiscovery tool can do so in cases of extreme threats or other risk behaviors predetermined by the customer, he added.

CNBC did not hear back from Walmart, T-Mobile, Chevron, Starbucks or Nestlé about their use of Aware. An AstraZeneca representative said the company uses the eDiscovery product but does not use analytics to monitor sentiment or toxicity. Delta told CNBC that it uses Aware’s analytics and eDiscovery to monitor trends and sentiment as a way to gather feedback from employees and other stakeholders, and for legal record retention on its social media platform.

You don’t have to be a dystopian novel enthusiast to see where it could all go wrong.


Jutta Williams, co-founder of Humane Intelligence, a nonprofit dedicated to AI accountability, said AI adds a new and potentially problematic aspect to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially in email communications.

Speaking generally about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC, “A lot of this becomes thought crime.” She added: “This is treating people like inventory in a way I’ve never seen before.”

Employee surveillance AI is a fast-growing but niche piece of a larger AI market that exploded last year, following the launch of OpenAI’s ChatGPT chatbot in late 2022. Generative AI quickly became the buzzword on corporate earnings calls, and some form of the technology is automating tasks in almost every industry, from financial services and biomedical research to logistics, online travel and utilities.

Aware’s revenue has grown 150% annually on average over the past five years, Schumann told CNBC, and its typical customer has about 30,000 employees. Major competitors include Qualtrics, Relativity, Proofpoint, Smarsh, and Netskope.

By industry standards, Aware remains fairly lean. The company last raised money in 2021, when it raised $60 million in a round led by Goldman Sachs Asset Management. Compare that with large language model, or LLM, companies such as OpenAI and Anthropic, which have each raised billions of dollars, largely from strategic partners.

‘Real-time toxicity monitoring’

Schumann founded the company in 2017 after spending almost eight years working on enterprise collaboration at insurance company Nationwide.

Before that, he was an entrepreneur. And Aware is not the first company he has started that has evoked thoughts of Orwell.

In 2005, Schumann founded a company called BigBrotherLite.com. According to its LinkedIn profile, the company developed software that “enhanced the digital and mobile viewing experience” of the CBS reality series “Big Brother.” In Orwell’s classic novel “1984,” Big Brother was the leader of a totalitarian state in which citizens were under perpetual surveillance.

“I built a simple player focused on a cleaner, easier consumer experience for people to watch the TV show on their computer,” Schumann said in an email.

At Aware, he is doing something very different.

Each year, the company publishes a report that aggregates information on billions (in 2023, the figure was 6.5 billion) of messages sent across large companies, tabulating perceived risk factors and workplace sentiment scores. Schumann refers to the trillions of messages sent through workplace communication platforms each year as “the fastest-growing set of unstructured data in the world.”

By including other types of content that are shared, such as images and videos, Aware’s analytical AI analyzes more than 100 million pieces of content every day. In doing so, the technology creates a social graph of the company, noting which teams internally talk to each other more than others.
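As a rough, hypothetical sketch of that idea (not Aware’s actual implementation), a team-level social graph can be built from message metadata alone, counting how often members of different teams message one another; the team names below are invented:

```python
from collections import Counter

# Hypothetical message metadata: only the sender's and recipient's team is used,
# not the message content. Team names are invented for illustration.
messages = [
    {"from_team": "engineering", "to_team": "product"},
    {"from_team": "engineering", "to_team": "product"},
    {"from_team": "product", "to_team": "engineering"},
    {"from_team": "sales", "to_team": "marketing"},
]

# Count interactions between unordered team pairs to form the graph's edges.
edges = Counter(frozenset((m["from_team"], m["to_team"])) for m in messages)

# Teams that talk to each other most appear first.
for pair, count in edges.most_common():
    print(" <-> ".join(sorted(pair)), count)
```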

“It always tracks employee sentiment in real time, and it always tracks toxicity in real time,” Schumann said of the analytics tool. “If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it’s because they’re talking about something positively, collectively. The technology would be able to tell them whatever it was.”

Aware confirmed to CNBC that it uses data from its enterprise customers to train its machine learning models. The company’s data repository contains about 6.5 billion messages, representing about 20 billion individual interactions between more than 3 million unique employees, the company said.

When a new customer signs up for the analytics tool, it takes about two weeks for Aware’s AI models to train on employee messages and learn the patterns of emotion and sentiment within the company, so they can tell what is normal and what is abnormal, Schumann said.
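Aware has not published how its models define “normal,” but a minimal sketch of the underlying idea, assuming a simple statistical baseline learned during that training window, might look like this:

```python
from statistics import mean, stdev

# Assumed approach for illustration only: learn a baseline from sentiment
# readings gathered during the training window, then flag large deviations.
baseline = [0.2, 0.1, 0.15, 0.25, 0.2, 0.18]  # invented sentiment scores
mu, sigma = mean(baseline), stdev(baseline)

def is_abnormal(reading: float, z_threshold: float = 3.0) -> bool:
    """Flag a reading that sits far outside the learned norm."""
    return abs(reading - mu) > z_threshold * sigma

print(is_abnormal(0.19))   # within the norm -> False
print(is_abnormal(-0.8))   # sharp negative swing -> True
```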

“It won’t have people’s names, to protect privacy,” Schumann said. Rather, he said, clients will see that “perhaps the over-40 workforce in this part of the United States is viewing the changes to (a) policy very negatively because of the cost, but everyone else outside of that age group and location sees it as positive because it impacts them in a different way.”
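A minimal sketch of that kind of cohort-level reporting, using invented age bands, regions and sentiment scores rather than anything from Aware’s product, could look like this:

```python
from collections import defaultdict
from statistics import mean

# Invented, anonymized records: sentiment is reported by cohort, never by name.
records = [
    {"age_band": "40+", "region": "US-Midwest", "sentiment": -0.6},
    {"age_band": "40+", "region": "US-Midwest", "sentiment": -0.4},
    {"age_band": "under-40", "region": "US-West", "sentiment": 0.5},
    {"age_band": "under-40", "region": "US-West", "sentiment": 0.3},
]

# Group scores by (age band, region) and report the average per cohort.
by_cohort = defaultdict(list)
for r in records:
    by_cohort[(r["age_band"], r["region"])].append(r["sentiment"])

for cohort, scores in by_cohort.items():
    print(cohort, round(mean(scores), 2))
```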


But Aware’s eDiscovery tool works differently. A company can configure role-based access to employee names depending on the “extreme risk” category the company chooses, which instructs Aware’s technology to pull an individual’s name, in certain cases, for human resources or another company representative.

“Some of the most common are extreme violence, extreme intimidation and harassment, but they vary by industry,” Schumann said, adding that suspected insider trading would be tracked in financial services.

For example, a customer can specify a “violent threats” policy, or any other category, using Aware’s technology, Schumann said, and have the AI models monitor for violations in Slack, Microsoft Teams and Workplace from Meta. The customer could also combine that with rule-based flags for certain phrases, statements and more. If the AI found something that violated a company’s specified policies, it could provide the employee’s name to the customer’s designated representative.
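As an illustration only, and assuming invented policy phrases and a stand-in scoring function rather than Aware’s actual models, combining rule-based flags with a classifier score might look roughly like this:

```python
import re

# Invented policy phrases for a hypothetical "violent threats" category.
FLAGGED_PHRASES = [r"\bI will hurt you\b", r"\bleak the deal terms\b"]

def rule_flags(text: str) -> list[str]:
    """Return the policy phrases that match the message text."""
    return [p for p in FLAGGED_PHRASES if re.search(p, text, re.IGNORECASE)]

def model_risk_score(text: str) -> float:
    """Stand-in for an ML classifier that scores a message 0.0-1.0 for risk."""
    return 0.9 if "hurt" in text.lower() else 0.1

def review_message(text: str, threshold: float = 0.8) -> dict:
    """Escalate when a rule matches or the model score crosses the threshold."""
    flags = rule_flags(text)
    score = model_risk_score(text)
    return {
        "escalate": bool(flags) or score >= threshold,
        "matched_phrases": flags,
        "model_score": score,
    }

print(review_message("I will hurt you if this ships late"))
```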

This type of practice has been used for years in email communications. What is new is the use of AI and its application in workplace messaging platforms such as Slack and Teams.

Amba Kak, executive director of the AI Now Institute at New York University, is concerned about the use of AI to help determine what is considered risky behavior.

“It results in a chilling effect on what people say in the workplace,” Kak said, adding that the Federal Trade Commission, the Justice Department and the Equal Employment Opportunity Commission have expressed concerns about the matter, though she was not speaking specifically about Aware’s technology. “These are both issues of workers’ rights and privacy.”

Schumann said that while Aware’s eDiscovery tool allows human resources or security investigations teams to use AI to search through massive amounts of data, a “similar but basic capability” already exists today in Slack, Teams and other platforms.

“A key distinction here is that Aware and its AI models do not make decisions,” Schumann said. “Our AI simply makes it easier to analyze this new set of data to identify potential risks or policy violations.”

Privacy Concerns

Even when data is aggregated or anonymized, research suggests, the idea that privacy is thereby protected is a misconception. A landmark data privacy study using 1990 US Census data showed that 87% of Americans could be uniquely identified solely by ZIP code, date of birth and gender. Aware customers using its analytics tool have the power to add metadata to message tracking, such as the employee’s age, location, division, seniority or job function.
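That finding is easy to illustrate: even with names stripped, a handful of quasi-identifiers can single out a record. The toy data below is invented:

```python
from collections import Counter

# Invented "anonymized" records: no names, only quasi-identifiers.
rows = [
    {"zip": "43215", "dob": "1985-03-02", "gender": "F"},
    {"zip": "43215", "dob": "1985-03-02", "gender": "M"},
    {"zip": "43085", "dob": "1990-07-14", "gender": "F"},
]

# Count how many records share each (zip, dob, gender) combination.
keys = Counter((r["zip"], r["dob"], r["gender"]) for r in rows)
unique = [k for k, n in keys.items() if n == 1]

# A record that is unique on these fields can potentially be re-identified.
print(f"{len(unique)} of {len(rows)} records are unique on (zip, dob, gender)")
```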

“What they’re saying is based on a very outdated and, I would say, completely discredited notion at this point that anonymization or aggregation is like a panacea to solve privacy concerns,” Kak said.

Additionally, the type of AI model Aware uses can be effective at generating inferences from aggregated data, making accurate guesses, for example, about personal identifiers based on language, context, slang terms, and more, according to recent research.

“No company is essentially in a position to offer blanket guarantees about the privacy and security of LLMs and these types of systems,” Kak said. “There is no one who can seriously say that these challenges are solved.”

And what about employee recourse? If an interaction is flagged and a worker is disciplined or fired, it’s difficult for them to offer a defense if they’re not aware of all the facts involved, Williams said.

“How do you face your accuser when we know that the explainability of AI is still immature?” Williams said.

Schumann responded: “None of our AI models make decisions or recommendations about employee discipline.”

“When the model flags an interaction,” Schumann said, “it provides full context about what happened and what policy it triggered, giving investigation teams the information they need to decide next steps consistent with company policies and the law.”

