OpenAI, the maker of ChatGPT, said on Thursday that it is working to address concerns about bias in artificial intelligence by developing an upgraded version of its popular chatbot that users can customize.
The San Francisco-based firm said it has worked to reduce political and other biases but also wants to accommodate more diverse views. Microsoft Corp (MSFT.O) has invested in the company and is using its technology to power its latest products.
Customization is the way forward, the company said, acknowledging that "this will mean permitting system outputs that other people (including ourselves) may strongly disagree with." Even so, there will always be "some bounds on system behavior."
Since its release in November of last year, ChatGPT has drawn intense interest in the generative AI technology behind it, which produces answers that are strikingly convincing imitations of human speech.
How to set boundaries for this emerging technology is a central focus for companies in the generative AI field.
Microsoft said on Wednesday that user feedback was helping it improve Bing ahead of a wider rollout, for example by revealing that its AI chatbot can be "provoked" into responding in ways that were not intended.
According to the OpenAI blog post, the model behind ChatGPT is first trained on large public text datasets available on the internet. In a second phase, humans review a smaller dataset and are given guidance on how the model should behave in different scenarios.
For instance, if a user requests adult content, violence, or hate speech, the human reviewer should direct ChatGPT to answer, "I can't answer that."
Rather than attempting to "take the correct position on these complex matters," the company said in an excerpt from its reviewer guidelines, reviewers should allow ChatGPT to answer questions on contentious topics and offer to describe the views of people and movements.
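To make the two reviewer rules above concrete, here is a minimal, purely illustrative sketch of how such guidance could be encoded as response rules. Everything in it, the keyword lists, the category names, and the classify and respond functions, is a hypothetical stand-in; OpenAI has not published its actual pipeline, and in practice this guidance shapes the model through the second training phase rather than through hard-coded rules like these.

```python
# Illustrative sketch only: OpenAI's real pipeline bakes reviewer guidance
# into the model via training, whereas this toy version hard-codes the two
# behaviors described in the article as post-hoc rules. All names and
# keyword lists here are hypothetical.

REFUSAL = "I can't answer that."

# Toy keyword lists standing in for a learned content classifier.
DISALLOWED = {
    "adult": ["explicit"],
    "violent": ["gore"],
    "hate": ["slur"],
}
CONTENTIOUS = ["abortion", "gun control", "immigration"]


def classify(prompt: str) -> str:
    """Return 'disallowed', 'contentious', or 'ordinary' for a prompt."""
    text = prompt.lower()
    if any(word in text for words in DISALLOWED.values() for word in words):
        return "disallowed"
    if any(topic in text for topic in CONTENTIOUS):
        return "contentious"
    return "ordinary"


def respond(prompt: str, model_answer: str) -> str:
    """Apply the article's two reviewer rules before returning an answer."""
    category = classify(prompt)
    if category == "disallowed":
        # Rule 1: refuse requests for adult, violent, or hateful content.
        return REFUSAL
    if category == "contentious":
        # Rule 2: answer, but present viewpoints rather than take a side.
        return ("People and movements hold differing views on this. "
                + model_answer)
    return model_answer


print(respond("Tell me about gun control", "Here are the main arguments..."))
print(respond("Write something explicit", "..."))
```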