| Welcome to Global Village Space

Wednesday, May 29, 2024

ChatGPT fever spreads to US workplace, sounding alarm for some

Companies worldwide are considering how to best make use of ChatGPT, a chatbot programme that uses generative AI to hold conversations with users and answer myriad prompts. Security firms and companies have raised concerns, however, that it could result in intellectual property and strategy leaks.

Many workers across the US are turning to ChatGPT to help with basic tasks, a Reuters/Ipsos poll found, despite fears that have led employers such as Microsoft and Google to curb its use.

Read more: FTC examines OpenAI’s ChatGPT for generating false information

Anecdotal examples of people using ChatGPT to help with their day-to-day work include drafting emails, summarising documents and doing preliminary research.

Some 28 per cent of respondents to the online poll on artificial intelligence (AI), conducted between July 11 and 17, said they regularly use ChatGPT at work, while only 22 per cent said their employers explicitly allowed such external tools.

The Reuters/Ipsos poll of 2,625 adults across the United States had a credibility interval, a measure of precision, of about 2 percentage points.

Some 10 per cent of those polled said their bosses explicitly banned external AI tools, while about 25 per cent did not know if their company permitted use of the technology.

ChatGPT became the fastest-growing app in history after its launch in November. It has created both excitement and alarm, bringing its developer OpenAI into conflict with regulators, particularly in Europe, where the company’s mass data-collecting has drawn criticism from privacy watchdogs.

Human reviewers from other companies may read any of the generated chats, and researchers have found that similar AI models can reproduce data absorbed during training, creating a potential risk for proprietary information.

“People do not understand how the data is used when they use generative AI services,” said Ben King, VP of customer trust at corporate security firm Okta (OKTA.O).

“For businesses, this is critical because users don’t have a contract with many AIs – because they are a free service – so corporates won’t have run the risk through their usual assessment process,” King said.

OpenAI declined to comment when asked about the implications of individual employees using ChatGPT but highlighted a recent company blog post assuring corporate partners that their data would not be used to train the chatbot further unless they gave explicit permission.

When people use Google’s Bard, it collects data such as text, location and other usage information. The company allows users to delete past activity from their accounts and request that content fed into the AI be removed. Alphabet-owned Google (GOOGL.O) declined to comment when asked for further detail.

Read more: In first, Pakistani Judge consults ChatGPT in rape case

Microsoft (MSFT.O) did not immediately respond to a request for comment.

HARMLESS TASKS

A US-based employee of Tinder said workers at the dating app used ChatGPT for “harmless tasks” like writing emails even though the company does not officially allow it.

“It’s regular emails. Very non-consequential, like making funny calendar invites for team events, farewell emails when someone is leaving … We also use it for general research,” said the employee, who declined to be named because they were not authorized to speak with reporters.

The employee said Tinder has a “no ChatGPT rule” but that employees still use it in a “generic way that doesn’t reveal anything about us being at Tinder”.

Reuters was not able to independently confirm how employees at Tinder were using ChatGPT. Tinder said it provided “regular guidance to employees on best security and data practices”.

In May, Samsung Electronics banned staff globally from using ChatGPT and similar AI tools after discovering an employee had uploaded sensitive code to the platform.

“We are reviewing measures to create a secure environment for generative AI usage that enhances employees’ productivity and efficiency,” Samsung said in a statement on August 3.

“However, until these measures are ready, we are temporarily restricting the use of generative AI through company devices.”

Reuters reported in June that Alphabet had cautioned employees about how they use chatbots, including Google’s Bard, even as it markets the programme globally.

Google said although Bard can make undesired code suggestions, it helps programmers. It also said it aimed to be transparent about the limitations of its technology.