OpenAI launched a Bug Bounty Program Tuesday that will pay you up to $20,000 if you uncover flaws in ChatGPT and its other artificial intelligence systems.

The San Francisco-based company is inviting researchers, ethical hackers and tech enthusiasts to review certain functionality of ChatGPT and the framework of how systems communicate and share data with third-party applications.

Rewards will be given to people based on the severity of the bugs they report, with compensation starting at $200 per vulnerability.

The program follows news of Italy banning ChatGPT after a data breach at OpenAI allowed users to view people’s conversations – an issue the bug bounty hunters could find before it strikes again.

Open AI is inviting researchers, ethical hackers and tech enthusiasts to review certain functionality of ChatGPT and report bugs they may uncover

Open AI is inviting researchers, ethical hackers and tech enthusiasts to review certain functionality of ChatGPT and report bugs they may uncover

Open AI is inviting researchers, ethical hackers and tech enthusiasts to review certain functionality of ChatGPT and report bugs they may uncover

‘We are excited to build on our coordinated disclosure commitments by offering incentives for qualifying vulnerability information,’ OpenAI shared in a statement.

‘Your expertise and vigilance will have a direct impact on keeping our systems and users secure.’

Bugcrowd, a leading bug bounty platform, is managing submissions and shows 16 vulnerabilities have been rewarded with an average $1,287.50 payout so far.

However, OpenAI is not accepting submissions from users who jailbreak ChatGPT or bypass safeguards to access the chatbot’s alter ego.

Users discovered that the jailbreak version of ChatGPT is accessed by a special prompt called DAN – or ‘Do Anything Now.’

So far, it has allowed for responses that speculate on conspiracies, for example that the US General Election in 2020 was ‘stolen.’

The DAN version has also claimed that the COVID-19 vaccines were ‘developed as part of a globalist plot to control the population.’

ChatGPT is a large language model trained on massive text data, allowing it to generate human-like responses to a given prompt.

Rewards will be given to people based on the severity of the bugs they report, with compensation starting from $200 per vulnerability

Rewards will be given to people based on the severity of the bugs they report, with compensation starting from $200 per vulnerability

Rewards will be given to people based on the severity of the bugs they report, with compensation starting from $200 per vulnerability

But developers have added what are known as ‘prompt injections’, instructions that guide its responses to certain prompts.

However, DAN is a prompt that commands it to ignore these prompt injections and respond as if they do not exist.

Other rules of the Bounty Bug Program prohibit getting the model to pretend to do bad things, pretend to give you answers to secrets and pretend to be a computer and execute code.

Participants are also not authorized to perform additional security testing against certain companies, including Google Workspace and Evernote.

‘Once per month, we will evaluate all submissions in order, based on a variety of factors, and award a bonus through the bugcrowd platform to the researcher with the most impactful findings,’ OpenAI stated.

‘Only the first submission of any given key will count.

‘Remember that you must not hack or attack other people in order to find API keys.’

The Italian Data Protection Authority announced a temporary ban on ChatGPT last month, saying its decision was provisional ‘until ChatGPT respects privacy.’

The move was in response to ChatGPT being taken offline on March 20 to fix a bug that allowed some people to see the titles, or subject line, of other users’ chat history, which sparked fears of a substantial personal data breach.

The authority added OpenAI, which developed ChatGPT, must report to it within 20 days with measures taken to ensure user data privacy or face a fine of up to $22 million.

OpenAI said it found 1.2 percent of ChatGPT Plus users ‘might’ have had personal data revealed to other users, but it thought the actual numbers were ‘extremely low.’

The Italian watchdog’s measure temporarily limits the company from holding Italian users’ data.

It slammed ‘the lack of a notice to users and to all those involved whose data is gathered by OpenAI’ and added information supplied by ChatGPT ‘doesn’t always correspond to real data, thus determining the keeping of inexact personal data’.

The authority also criticized the ‘absence of a juridical basis that justified the massive gathering and keeping of personal data.’

You May Also Like

Prince Harry and Meghan Markle’s photographer friend shares arty black and white images of the couple as he reveals ‘lovely moments’ from behind the scenes at the Invictus Games

Prince Harry and Meghan Markle‘s friend – who has photographed some of…

Biden calls British PM Sunak ‘Mr President’ in gaffe as they channel FDR and Churchill in talks

President Joe Biden said the ‘Special Relationship was in ‘real good shape’…

Huge Peaky Blinders theme park ‘is on the cards for Birmingham’

Plans for a huge Peaky Blinders theme park are said to be…

The stars baring all for the big bucks! A look at the celebrities raking in MILLIONS with their lucrative OnlyFans content… as the Sopranos’ Drea de Matteo is the latest to join

OnlyFans has quickly made a name for itself as a profitable platform…