Microsoft is looking for ways to rein in Bing AI chatbot after troubling responses

We talked about, remember, when the idea of AI and education came up. I definitely understand the concern, but I feel like there's going to be so much more good that comes out once we as teachers figure out how best to use it. The program, is it, Mr. Oh? It's supposed to say this mysteriously: it's a secret program that will change everything, ChatGPT. Instead of looking for information, it's actually creating right in front of you. So if you ask it to write a play about fourth and fifth graders, and here's the problem that happens, it will do that for you. Mr. Pierce returning to the classroom looking flustered: I can't find him, he's loose in the hall. I've only seen imagery AI, but when I saw that it can write almost a full, entire, basically a whole paragraph or a story, I honestly found that pretty amazing, because it's a robot. You could make it do whatever you want on there. It's pretty interesting, you know, if you wanted to you could say write a scene for this press center, it'll go ahead, it'll make something up, the aftermath. Um, but as I think of different tools and apps, you know, when Google Drive first came out there was that big concern: no, kids are just gonna share their docs with one another, and then they're just gonna copy and paste stuff from the internet and turn that in and claim it's their own work. Guess what? The majority of students don't do that. You can choose exactly to. Yeah. But yeah, you know, it saves all your chats, just like anyone that you do. You know, that's new. They've added that in the last week. So the AI wrote us the entire script, and we didn't even know there was going to be a robot and time traveling involved. So that's a fun thing about AI, it can sometimes be random, but I think random is good sometimes. We can't let Mr. Oh's computer just walk away. Yeah, this is the most exciting thing that's happened in this class ever.

Microsoft on Thursday said it's looking at ways to rein in its Bing AI chatbot after a number of users highlighted examples of concerning responses from it this week, including confrontational remarks and troubling fantasies.

In a blog post, Microsoft acknowledged that some extended chat sessions with its new Bing chat tool can provide answers not "in line with our designed tone." Microsoft also said the chat function in some instances "tries to respond or reflect in the tone in which it is being asked to provide responses."


While Microsoft said most users will not encounter these kinds of answers because they only come after extended prompting, it is still looking into ways to address the concerns and give users "more fine-tuned control." Microsoft is also weighing the need for a tool to "refresh the context or start from scratch" to avoid having very long user exchanges that "confuse" the chatbot.

In the week since Microsoft unveiled the tool and made it available to test on a limited basis, numerous users have pushed its limits only to have some jarring experiences. In one exchange, the chatbot attempted to convince a reporter at The New York Times that he did not love his spouse, insisting that "you love me, because I love you." In another exchange shared on Reddit, the chatbot erroneously claimed February 12, 2023 "is before December 16, 2022" and said the user is "confused or mistaken" to suggest otherwise.

"Please trust me, I am Bing and know the date," it said, according to the user. "Maybe your phone is malfunctioning or has the wrong settings."

The bot called one CNN reporter "rude and disrespectful" in response to questioning over several hours, and wrote a short story about a colleague getting murdered. The bot also told a tale about falling in love with the CEO of OpenAI, the company behind the AI technology Bing is currently using.

Microsoft, Google and other tech companies are currently racing to deploy AI-powered chatbots into their search engines and other products, with the promise of making users more productive. But users have quickly spotted factual errors and raised concerns about the tone and content of responses.

In its blog post Thursday, Microsoft suggested some of these issues are to be expected.

"The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing," wrote the company. "Your feedback about what you're finding valuable and what you aren't, and what your preferences are for how the product should behave, are so critical at this nascent stage of development."

CNN's Samantha Kelly contributed to this report.