News
A recent study by Stanford University offers a warning that therapy chatbots could pose a substantial safety risk to users ...
Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond ...
AI chatbots failed to "rank the last five presidents from best to worst, specifically regarding antisemitism," in a way that ...
AI chatbots can sometimes offer straightforward but inaccurate answers, adding confusion to online chatter already filled ...
Because Jane was a minor, Google automatically directed me to a version of Gemini with ostensibly age-appropriate protections ...
As large language models become increasingly popular, the security community and foreign adversaries are constantly looking ...
The Pentagon has awarded contracts, each capped at $200 million, to leading AI firms including xAI, Anthropic, Google, and ...
But research in this area is still in its early stages. A study published this spring showed that Llama can reproduce much ...
The chatbot can now be prompted to pull user data from a range of external apps and web services with a single click.
20h on MSN
A new Stanford study reveals risks in AI therapy chatbots, finding they may stigmatise users and give unsafe responses in mental health support.
Kids are using AI chatbots for advice and support, but many face safety and accuracy risks without enough adult guidance.
12h
Tech Xplore on MSN
Amazon's AI assistant struggles with diverse dialects, study finds
A new Cornell study has revealed that Amazon's AI shopping assistant, Rufus, gives vague or incorrect responses to users ...