Faculty Associate Petra Molnar is among the authors of a new report from the Migration + Tech Monitor and the Refugee Law Lab.
Rumman Chowdhury weighs in on this year's 'AI optimism' ads, observing that they are at odds with viewers' wariness of overreliance on technology. Chowdhury writes that "the AI 'optimism at all cost ...
Professor Gabriel Weil will discuss the role that tort law can play in compelling AI companies to internalize the risks ...
At a recent panel convened by the Weatherhead Center for International Affairs, BKC Affiliate Bruce Schneier spoke on the threats and opportunities presented by governments worldwide adopting AI tools ...
Join ASML for the launch of Transparency Hub, a new platform designed to compare the data practices of consumer-facing social and technology applications.
Who is eligible to submit an essay? Only currently enrolled, degree‑seeking Harvard students are eligible. Unfortunately, cross‑registered students from other institutions and Harvard fellows are not ...
How can large language models (LLMs) transform the way lawyers, researchers, and the public interact with the law? Join us for a hands-on conversation about the potential of LLMs to make sense of ...
Affiliate Ram Shankar Siva Kumar and coauthors "present a practical scanner for identifying sleeper agent-style backdoors in causal language models." ...
Dr. Claire Wardle is a leading expert on social media, user generated content, and verification. Her research sits at the increasingly visible and critical intersection of technology, communications ...
Opinion
AI-generated text is overwhelming institutions – setting off a no-win 'arms race' with AI detectors
Bruce Schneier and Nathan Sanders discuss media outlets being inundated with AI-generated text, swamping traditional editorial models.
Trebor Scholz and Mark Esposito provide guidance for building community-owned alternatives to extractive AI systems.
Faculty Associate George Chalhoub is quoted in Fortune, offering a reflection on Moltbook that underscores how large-scale agent-to-agent interaction surfaces systemic vulnerabilities in current AI ...